path: root/lib
Age | Commit message | Author
2025-11-20 | cpumask: Introduce cpumask_weighted_or() | Thomas Gleixner
CID management ORs two cpumasks and then calculates the weight of the result. That's inefficient, as it has to walk the same data twice. As this is done with the runqueue lock held, there is a real benefit to speeding this up. Depending on the system, this results in 10-20% fewer cycles spent with the runqueue lock held for a 4K cpumask. Provide cpumask_weighted_or() and the corresponding bitmap functions, which return the weight of the OR result right away. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Yury Norov (NVIDIA) <yury.norov@gmail.com> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://patch.msgid.link/20251119172549.448263340@linutronix.de
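A minimal sketch of the two-pass pattern being replaced versus the combined call; the exact cpumask_weighted_or() prototype is not quoted in this log and is assumed here to mirror cpumask_or() while returning the weight:

```c
#include <linux/cpumask.h>

/* Before: walks both masks for the OR, then walks the result again. */
static unsigned int cid_weight_two_pass(struct cpumask *dst,
					const struct cpumask *a,
					const struct cpumask *b)
{
	cpumask_or(dst, a, b);
	return cpumask_weight(dst);
}

/*
 * After (sketch): one combined walk. The prototype of
 * cpumask_weighted_or() is assumed to follow cpumask_or(),
 * returning the weight of the OR result.
 */
static unsigned int cid_weight_one_pass(struct cpumask *dst,
					const struct cpumask *a,
					const struct cpumask *b)
{
	return cpumask_weighted_or(dst, a, b);
}
```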
2025-11-19 | raid6: test: Add support for RISC-V | Chunyan Zhang
Add RISC-V code to be compiled to allow the userspace raid6test program to be built and run on RISC-V. Signed-off-by: Chunyan Zhang <zhang.lyra@gmail.com> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://patch.msgid.link/20250718072711.3865118-6-zhangchunyan@iscas.ac.cn Signed-off-by: Paul Walmsley <pjw@kernel.org>
2025-11-19 | raid6: riscv: Allow code to be compiled in userspace | Chunyan Zhang
To support userspace raid6test, this patch adds a __KERNEL__ ifdef around the kernel header inclusions, as well as userspace wrapper definitions, to allow the code to be compiled in userspace. This patch also drops the NSIZE macro in favor of using the vector length, which works for both kernel and user space. Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Signed-off-by: Chunyan Zhang <zhangchunyan@iscas.ac.cn> Link: https://patch.msgid.link/20250718072711.3865118-5-zhangchunyan@iscas.ac.cn Signed-off-by: Paul Walmsley <pjw@kernel.org>
2025-11-19 | raid6: riscv: Prevent compiler from breaking inline vector assembly code | Chunyan Zhang
To prevent the compiler from breaking the inline vector assembly code, this code must be built without compiler support for vector. Signed-off-by: Chunyan Zhang <zhangchunyan@iscas.ac.cn> Link: https://patch.msgid.link/20250718072711.3865118-4-zhangchunyan@iscas.ac.cn [pjw@kernel.org: cleaned up commit message] Signed-off-by: Paul Walmsley <pjw@kernel.org>
2025-11-19 | lib/vsprintf: Add specifier for printing struct timespec64 | Andy Shevchenko
A handful of drivers want to print the content of a struct timespec64 in the format %lld:%09ld. In order to make their lives easier, add the corresponding specifier directly to the printf() implementation. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Tested-by: Petr Mladek <pmladek@suse.com> Link: https://patch.msgid.link/20251113150217.3030010-2-andriy.shevchenko@linux.intel.com Signed-off-by: Petr Mladek <pmladek@suse.com>
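For context, a hedged sketch of the open-coded pattern such drivers use today; the new specifier's name is not quoted in this log, so only the existing %lld:%09ld formatting is shown:

```c
#include <linux/printk.h>
#include <linux/time64.h>

/* Open-coded today: seconds and nanoseconds printed separately. */
static void report_timestamp(const struct timespec64 *ts)
{
	pr_info("timestamp: %lld:%09ld\n",
		(long long)ts->tv_sec, ts->tv_nsec);
}
```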
2025-11-19 | lib/vsprintf: Deduplicate special hex number specifier data | Andy Shevchenko
Two functions use almost the same specifier data for the special hex number; only the field width differs, as it is calculated from the size of the given type. Therefore, introduce a compound-literal macro to deduplicate the rest. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Tested-by: Petr Mladek <pmladek@suse.com> Link: https://patch.msgid.link/20251113150313.3030700-1-andriy.shevchenko@linux.intel.com Signed-off-by: Petr Mladek <pmladek@suse.com>
2025-11-18 | lib/strn*,uaccess: Use masked_user_{read/write}_access_begin when required | Christophe Leroy
Properly use masked_user_read_access_begin() and masked_user_write_access_begin() instead of masked_user_access_begin() in order to match user_read_access_end() and user_write_access_end(). This is important for architectures like PowerPC that enable user reads and user writes separately. That means masked_user_read_access_begin() is used when user memory is exclusively read during the window and masked_user_write_access_begin() is used when user memory is exclusively written during the window. masked_user_access_begin() remains and is used when both reads and writes are performed during the open window. Each of them is expected to be terminated by the matching user_read_access_end(), user_write_access_end() and user_access_end(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://patch.msgid.link/cb5e4b0fa49ea9c740570949d5e3544423389757.1763396724.git.christophe.leroy@csgroup.eu
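A minimal sketch of the read-side pattern described here, loosely modeled on strncpy_from_user(); the fallback path and error handling are simplified, and read_user_word() is an illustrative name:

```c
#include <linux/errno.h>
#include <linux/uaccess.h>

/* Sketch: one word read inside a read-only masked-access window. */
static long read_user_word(unsigned long __user *uaddr, unsigned long *val)
{
	if (can_do_masked_user_access())
		uaddr = masked_user_read_access_begin(uaddr);
	else if (!user_read_access_begin(uaddr, sizeof(*uaddr)))
		return -EFAULT;

	unsafe_get_user(*val, uaddr, Efault);
	user_read_access_end();
	return 0;

Efault:
	user_read_access_end();
	return -EFAULT;
}
```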
2025-11-18 | iov_iter: Add missing speculation barrier to copy_from_user_iter() | Christophe Leroy
The result of "access_ok()" can be mis-speculated, so the CPU can end up executing speculatively past the check: if (access_ok(from, size)) // Right here. For the same reason as done for copy_from_user() in commit 74e19ef0ff80 ("uaccess: Add speculation barrier to copy_from_user()"), add a speculation barrier to copy_from_user_iter(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://patch.msgid.link/6b73e69cc7168c89df4eab0a216e3ed4cca36b0a.1763396724.git.christophe.leroy@csgroup.eu
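A hedged sketch mirroring the shape of the fix commit 74e19ef0ff80 applied to copy_from_user(); copy_in_from_user() is an illustrative name and the real copy_from_user_iter() internals are more involved:

```c
#include <linux/nospec.h>
#include <linux/uaccess.h>

static size_t copy_in_from_user(void *dst, const void __user *src, size_t len)
{
	if (!access_ok(src, len))
		return len;		/* nothing copied */

	/*
	 * access_ok() itself can be mis-speculated, so make sure the CPU
	 * does not speculatively read user memory past a failed check
	 * before the copy starts.
	 */
	barrier_nospec();

	return raw_copy_from_user(dst, src, len);
}
```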
2025-11-18 | iov_iter: Convert copy_from_user_iter() to masked user access | Christophe Leroy
copy_from_user_iter() lacks a speculation barrier; adding one would degrade performance on some architectures like x86, which would be unfortunate as copy_from_user_iter() is a critical hotpath function. Convert copy_from_user_iter() to use masked user access on architectures that support it. This allows adding the speculation barrier without impacting performance. This is similar to what was done for copy_from_user() in commit 0fc810ae3ae1 ("x86/uaccess: Avoid barrier_nospec() in 64-bit copy_from_user()") [ tglx: Massage change log ] Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://patch.msgid.link/58e4b07d469ca68a2b9477fe2c1ccc8a44cef131.1763396724.git.christophe.leroy@csgroup.eu
2025-11-17 | string: provide strends() | Bartosz Golaszewski
Implement a function for checking if a string ends with a different string and add its kunit test cases. Acked-by: Linus Walleij <linus.walleij@linaro.org> Link: https://lore.kernel.org/r/20251112-gpio-shared-v4-1-b51f97b1abd8@linaro.org Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
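A minimal sketch of what such a helper can look like, built only on strlen() and memcmp(); the actual implementation and its kunit cases may differ:

```c
#include <linux/string.h>
#include <linux/types.h>

/* Sketch: return true if 'str' ends with 'suffix'. */
static inline bool strends(const char *str, const char *suffix)
{
	size_t slen = strlen(str);
	size_t sufflen = strlen(suffix);

	return slen >= sufflen &&
	       !memcmp(str + slen - sufflen, suffix, sufflen);
}
```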
2025-11-16 | lib/test_vmalloc: remove xfail condition check | Uladzislau Rezki (Sony)
A test marked with "xfail = true" is expected to fail but that does not mean it is predetermined to fail. Remove "xfail" condition check for tests which pass successfully. Link: https://lkml.kernel.org/r/20251007122035.56347-3-urezki@gmail.com Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Baoquan He <bhe@redhat.com> Cc: Marco Elver <elver@google.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-16 | lib/test_vmalloc: add no_block_alloc_test case | Uladzislau Rezki (Sony)
Patch series "__vmalloc()/kvmalloc() and no-block support", v4. This patch (of 10): Introduce a new test case "no_block_alloc_test" that verifies non-blocking allocations using __vmalloc() with GFP_ATOMIC and GFP_NOWAIT flags. It is recommended to build kernel with CONFIG_DEBUG_ATOMIC_SLEEP enabled to help catch "sleeping while atomic" issues. This test ensures that memory allocation logic under atomic constraints does not inadvertently sleep. Link: https://lkml.kernel.org/r/20251007122035.56347-2-urezki@gmail.com Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Cc: Baoquan He <bhe@redhat.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Marco Elver <elver@google.com> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
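A hedged sketch of the kind of non-blocking allocation the test exercises; the real module's sizes, loop counts and test plumbing are not shown in this log, and try_no_block_alloc() is an illustrative name:

```c
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * Sketch: allocations that must not sleep. With CONFIG_DEBUG_ATOMIC_SLEEP
 * enabled, an accidental sleep in the allocation path would be flagged.
 */
static int try_no_block_alloc(void)
{
	void *p;

	p = __vmalloc(PAGE_SIZE, GFP_NOWAIT | __GFP_NOWARN);
	if (!p)
		p = __vmalloc(PAGE_SIZE, GFP_ATOMIC | __GFP_NOWARN);
	if (!p)
		return -ENOMEM;

	vfree(p);
	return 0;
}
```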
2025-11-15 | lib/test_kho: check if KHO is enabled | Pasha Tatashin
We must check whether KHO is enabled prior to issuing KHO commands, otherwise KHO internal data structures are not initialized. Link: https://lkml.kernel.org/r/20251106220635.2608494-1-pasha.tatashin@soleen.com Fixes: b753522bed0b ("kho: add test for kexec handover") Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com> Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202511061629.e242724-lkp@intel.com Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Cc: Alexander Graf <graf@amazon.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-14 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf after 6.18-rc5+ | Alexei Starovoitov
Cross-merge BPF and other fixes after downstream PR. Minor conflict in kernel/bpf/helpers.c Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-11-14 | kunit: Make filter parameters configurable via Kconfig | Thomas Weißschuh
Enable presetting the filter parameters from Kconfig options, similar to how other KUnit configuration parameters are handled already. This is useful to run a subset of tests even if the cmdline is not readily modifiable. Link: https://lore.kernel.org/r/20251106-kunit-filter-kconfig-v1-1-d723fb7ac221@linutronix.de Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2025-11-13 | Merge tag 'v6.18-rc5' into objtool/core, to pick up fixes | Ingo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-11-12 | Merge tag 'arm64-fpsimd-on-stack-for-v6.19' into libcrypto-fpsimd-on-stack | Eric Biggers
Pull fpsimd-on-stack changes from Ard Biesheuvel: "Shared tag/branch for arm64 FP/SIMD changes going through libcrypto" Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-12 | lib/crypto: arm64: Move remaining algorithms to scoped ksimd API | Ard Biesheuvel
Move the arm64 implementations of SHA-3 and POLYVAL to the newly introduced scoped ksimd API, which replaces kernel_neon_begin() and kernel_neon_end(). On arm64, this is needed because the latter API will change in an incompatible manner. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-12 | lib/xxhash: remove more unused xxh functions | Dr. David Alan Gilbert
xxh32_reset() and xxh32_copy_state() are unused, and with those gone, the xxh32_state struct is also unused. xxh64_copy_state() is also unused. Remove them all. (Also fixes a comment above the xxh64_state that referred to it as xxh32_state). Link: https://lkml.kernel.org/r/20251024205120.454508-1-linux@treblig.org Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Suggested-by: Christoph Hellwig <hch@infradead.org> Reviewed-by: Kuan-Wei Chiu <visitorckw@gmail.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-12 | dynamic_debug: add support for print stack | Ye Bin
In practical problem diagnosis, especially during the boot phase, it is often desirable to know the call sequence. However, currently, apart from adding print statements and recompiling the kernel, there seems to be no good alternative. If dynamic_debug supported printing the call stack, it would be very helpful for diagnosing issues. This patch adds support for the '+d' flag to dump the stack. Link: https://lkml.kernel.org/r/20251025080003.312536-1-yebin@huaweicloud.com Signed-off-by: Ye Bin <yebin10@huawei.com> Cc: Jason Baron <jbaron@akamai.com> Cc: Jim Cromie <jim.cromie@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-12 | uaccess: decouple INLINE_COPY_FROM_USER and CONFIG_RUST | Yury Norov (NVIDIA)
Commit 1f9a8286bc0c ("uaccess: always export _copy_[from|to]_user with CONFIG_RUST") exports _copy_{from,to}_user() unconditionally, if RUST is enabled. This pollutes the exported symbols namespace and spreads RUST ifdefery in core files. It's better to declare a corresponding helper under rust/helpers, similarly to how the non-underscored copy_{from,to}_user() are handled. [yury.norov@gmail.com: drop rust part of comment for _copy_from_user(), per Alice] Link: https://lkml.kernel.org/r/20251024154754.99768-1-yury.norov@gmail.com Link: https://lkml.kernel.org/r/20251023171607.1171534-1-yury.norov@gmail.com Signed-off-by: Yury Norov (NVIDIA) <yury.norov@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Miguel Ojeda <ojeda@kernel.org> Reviewed-by: Alice Ryhl <aliceryhl@google.com> Tested-by: Alice Ryhl <aliceryhl@google.com> Cc: Alex Gaynor <alex.gaynor@gmail.com> Cc: Andreas Hindborg <a.hindborg@kernel.org> Cc: Björn Roy Baron <bjorn3_gh@protonmail.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Danilo Krummrich <dakr@kernel.org> Cc: Gary Guo <gary@garyguo.net> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Trevor Gross <tmgross@umich.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
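A minimal sketch of the kind of C helper the changelog refers to, in the usual rust/helpers style; the exact file and helper name for _copy_from_user() are not quoted in this log, so the name below is an assumption:

```c
#include <linux/uaccess.h>

/*
 * Sketch of a rust/helpers-style wrapper: Rust calls this thin C helper
 * instead of requiring the underscored symbol to be exported. The helper
 * name below is an assumption.
 */
unsigned long rust_helper__copy_from_user(void *to, const void __user *from,
					  unsigned long n)
{
	return _copy_from_user(to, from, n);
}
```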
2025-11-12 | lib/xz: remove dead IA-64 (Itanium) support code | Ankan Biswas
Support for the IA-64 (Itanium) architecture was removed in commit cf8e8658100d ("arch: Remove Itanium (IA-64) architecture"). This patch drops the IA-64 specific decompression code from lib/xz, which was conditionally compiled with the now-obsolete CONFIG_XZ_DEC_IA64 option. Link: https://lkml.kernel.org/r/20251014052738.31185-1-spyjetfayed@gmail.com Signed-off-by: Ankan Biswas <spyjetfayed@gmail.com> Reviewed-by: Kuan-Wei Chiu <visitorckw@gmail.com> Reviewed-by: Khalid Aziz <khalid@kernel.org> Acked-by: Lasse Collin <lasse.collin@tukaani.org> Cc: David Hunter <david.hunter.linux@gmail.com> Cc: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-12 | hung_task: panic when there are more than N hung tasks at the same time | Li RongQing
The hung_task_panic sysctl is currently a blunt instrument: it's all or nothing. Panicking on a single hung task can be an overreaction to a transient glitch. A more reliable indicator of a systemic problem is when multiple tasks hang simultaneously. Extend hung_task_panic to accept an integer threshold, allowing the kernel to panic only when N hung tasks are detected in a single scan. This provides finer control to distinguish between isolated incidents and system-wide failures. The accepted values are: - 0: Don't panic (unchanged) - 1: Panic on the first hung task (unchanged) - N > 1: Panic after N hung tasks are detected in a single scan The original behavior is preserved for values 0 and 1, maintaining full backward compatibility. [lance.yang@linux.dev: new changelog] Link: https://lkml.kernel.org/r/20251015063615.2632-1-lirongqing@baidu.com Signed-off-by: Li RongQing <lirongqing@baidu.com> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Reviewed-by: Lance Yang <lance.yang@linux.dev> Tested-by: Lance Yang <lance.yang@linux.dev> Acked-by: Andrew Jeffery <andrew@codeconstruct.com.au> [aspeed_g5_defconfig] Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Hildenbrand <david@redhat.com> Cc: Florian Wesphal <fw@strlen.de> Cc: Jakub Kacinski <kuba@kernel.org> Cc: Jason A. Donenfeld <jason@zx2c4.com> Cc: Joel Granados <joel.granados@kernel.org> Cc: Joel Stanley <joel@jms.id.au> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Kees Cook <kees@kernel.org> Cc: Liam Howlett <liam.howlett@oracle.com> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Cc: "Paul E . McKenney" <paulmck@kernel.org> Cc: Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Cc: Petr Mladek <pmladek@suse.com> Cc: Phil Auld <pauld@redhat.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Shuah Khan <shuah@kernel.org> Cc: Simon Horman <horms@kernel.org> Cc: Stanislav Fomichev <sdf@fomichev.me> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
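A hedged sketch of the per-scan decision this describes, with illustrative names; the actual sysctl handling in kernel/hung_task.c is not shown here:

```c
#include <linux/kernel.h>

/*
 * Sketch only: sysctl_hung_task_panic holds the configured threshold
 * (0, 1, or N) and hung_count is the number of hung tasks found in the
 * current scan.
 */
static void maybe_panic_on_hung_tasks(unsigned int sysctl_hung_task_panic,
				      unsigned int hung_count)
{
	if (!sysctl_hung_task_panic)
		return;				/* 0: never panic */

	/* 1 keeps the old behaviour; N > 1 requires N hung tasks per scan. */
	if (hung_count >= sysctl_hung_task_panic)
		panic("hung_task: %u hung tasks detected in one scan\n",
		      hung_count);
}
```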
2025-11-12 | treewide: drop outdated compiler version remarks in Kconfig help texts | Lukas Bulwahn
As of writing, Documentation/Changes states the minimal versions as GNU C 8.1, Clang 15.0.0, and binutils 2.30. A few Kconfig help texts point out that specific GCC and Clang versions are needed, but by now those version references, such as "later than 4.0", "later than 4.4", or "clang later than 5.0", are obsolete and unlikely to be found by users configuring their kernel builds anyway. Drop these outdated remarks in Kconfig help texts referring to older compiler and binutils versions. No functional change. Link: https://lkml.kernel.org/r/20251010082138.185752-1-lukas.bulwahn@redhat.com Signed-off-by: Lukas Bulwahn <lukas.bulwahn@redhat.com> Cc: Bill Wendling <morbo@google.com> Cc: Justin Stitt <justinstitt@google.com> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Russel King <linux@armlinux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-12 | lib/crypto: arm/blake2b: Move to scoped ksimd API | Ard Biesheuvel
Even though ARM's versions of kernel_neon_begin()/_end() are not being changed, update the newly migrated ARM blake2b to the scoped ksimd API so that all ARM and arm64 in lib/crypto remains consistent in this manner. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-12 | Merge tag 'scoped-ksimd-for-arm-arm64' into libcrypto-fpsimd-on-stack | Eric Biggers
Pull scoped ksimd API for ARM and arm64 from Ard Biesheuvel: "Introduce a more strict replacement API for kernel_neon_begin()/kernel_neon_end() on both ARM and arm64, and replace occurrences of the latter pair appearing in lib/crypto" Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-12 | lib/vsprintf: Check pointer before dereferencing in time_and_date() | Andy Shevchenko
The pointer may be invalid by the time it gets to printf(). In particular, time_and_date() dereferences it in some cases without checking. Move the check from rtc_str() to time_and_date() to cover all cases. Fixes: 7daac5b2fdf8 ("lib/vsprintf: Print time64_t in human readable format") Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Link: https://patch.msgid.link/20251110132118.4113976-1-andriy.shevchenko@linux.intel.com Signed-off-by: Petr Mladek <pmladek@suse.com>
2025-11-12 | raid6: Move to more abstract 'ksimd' guard API | Ard Biesheuvel
Move away from calling kernel_neon_begin() and kernel_neon_end() directly, and instead, use the newly introduced scoped_ksimd() API. This permits arm64 to modify the kernel mode NEON API without affecting code that is shared between ARM and arm64. Reviewed-by: Eric Biggers <ebiggers@kernel.org> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
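A minimal sketch of the conversion, assuming scoped_ksimd() is used like the kernel's other scoped_*() guard macros (wrapping the statement that follows); do_neon_xor() is a purely hypothetical helper standing in for the RAID6 recovery code:

```c
#include <linux/types.h>
#include <asm/neon.h>	/* kernel_neon_begin()/kernel_neon_end() */
/* The header providing scoped_ksimd() is not quoted in this log. */

void do_neon_xor(void *dst, const void *src, size_t len);	/* hypothetical */

/* Before: explicit begin/end bracketing the NEON code. */
static void xor_blocks_old(void *dst, const void *src, size_t len)
{
	kernel_neon_begin();
	do_neon_xor(dst, src, len);
	kernel_neon_end();
}

/* After: the scoped guard opens and closes the kernel-mode SIMD section. */
static void xor_blocks_new(void *dst, const void *src, size_t len)
{
	scoped_ksimd()
		do_neon_xor(dst, src, len);
}
```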
2025-11-12 | lib/crc: Switch ARM and arm64 to 'ksimd' scoped guard API | Ard Biesheuvel
Before modifying the prototypes of kernel_neon_begin() and kernel_neon_end() to accommodate kernel mode FP/SIMD state buffers allocated on the stack, move arm64 to the new 'ksimd' scoped guard API, which encapsulates the calls to those functions. For symmetry, do the same for 32-bit ARM too. Reviewed-by: Eric Biggers <ebiggers@kernel.org> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2025-11-12 | lib/crypto: Switch ARM and arm64 to 'ksimd' scoped guard API | Ard Biesheuvel
Before modifying the prototypes of kernel_neon_begin() and kernel_neon_end() to accommodate kernel mode FP/SIMD state buffers allocated on the stack, move arm64 to the new 'ksimd' scoped guard API, which encapsulates the calls to those functions. For symmetry, do the same for 32-bit ARM too. Reviewed-by: Eric Biggers <ebiggers@kernel.org> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2025-11-11 | lib/crypto: tests: Add KUnit tests for POLYVAL | Eric Biggers
Add a test suite for the POLYVAL library, including: - All the standard tests and the benchmark from hash-test-template.h - Comparison with a test vector from the RFC - Test with key and message containing all one bits - Additional tests related to the key struct Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251109234726.638437-4-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-11 | lib/crypto: tests: Add additional SHAKE tests | Eric Biggers
Add the following test cases to cover gaps in the SHAKE testing: - test_shake_all_lens_up_to_4096() - test_shake_multiple_squeezes() - test_shake_with_guarded_bufs() Remove test_shake256_tiling() and test_shake256_tiling2() since they are superseded by test_shake_multiple_squeezes(). It provides better test coverage by using randomized testing. E.g., it's able to generate a zero-length squeeze followed by a nonzero-length squeeze, which the first 7 versions of the SHA-3 patchset handled incorrectly. Tested-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251026055032.1413733-7-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-11 | lib/crypto: tests: Add SHA3 kunit tests | David Howells
Add a SHA3 kunit test suite, providing the following: (*) A simple test of each of SHA3-224, SHA3-256, SHA3-384, SHA3-512, SHAKE128 and SHAKE256. (*) NIST 0- and 1600-bit test vectors for SHAKE128 and SHAKE256. (*) Output tiling (multiple squeezing) tests for SHAKE256. (*) Standard hash template test for SHA3-256. To make this possible, gen-hash-testvecs.py is modified to support sha3-256. (*) Standard benchmark test for SHA3-256. [EB: dropped some unnecessary changes to gen-hash-testvecs.py, moved addition of Testing section in doc file into this commit, and other small cleanups] Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Harald Freudenberger <freude@linux.ibm.com> Link: https://lore.kernel.org/r/20251026055032.1413733-6-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-11 | lib/crypto: tests: Add KUnit tests for BLAKE2b | Eric Biggers
Add a KUnit test suite for the BLAKE2b library API, mirroring the BLAKE2s test suite very closely. As with the BLAKE2s test suite, a benchmark is included. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251018043106.375964-9-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-11 | lib/crypto: x86/polyval: Migrate optimized code into library | Eric Biggers
Migrate the x86_64 implementation of POLYVAL into lib/crypto/, wiring it up to the POLYVAL library interface. This makes the POLYVAL library be properly optimized on x86_64. This drops the x86_64 optimizations of polyval in the crypto_shash API. That's fine, since polyval will be removed from crypto_shash entirely since it is unneeded there. But even if it comes back, the crypto_shash API could just be implemented on top of the library API, as usual. Adjust the names and prototypes of the assembly functions to align more closely with the rest of the library code. Also replace a movaps instruction with movups to remove the assumption that the key struct is 16-byte aligned. Users can still align the key if they want (and at least in this case, movups is just as fast as movaps), but it's inconvenient to require it. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251109234726.638437-6-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-11 | lib/crypto: arm64/polyval: Migrate optimized code into library | Eric Biggers
Migrate the arm64 implementation of POLYVAL into lib/crypto/, wiring it up to the POLYVAL library interface. This makes the POLYVAL library be properly optimized on arm64. This drops the arm64 optimizations of polyval in the crypto_shash API. That's fine, since polyval will be removed from crypto_shash entirely since it is unneeded there. But even if it comes back, the crypto_shash API could just be implemented on top of the library API, as usual. Adjust the names and prototypes of the assembly functions to align more closely with the rest of the library code. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251109234726.638437-5-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-11 | lib/crypto: polyval: Add POLYVAL library | Eric Biggers
Add support for POLYVAL to lib/crypto/. This will replace the polyval crypto_shash algorithm and its use in the hctr2 template, simplifying the code and reducing overhead. Specifically, this commit introduces the POLYVAL library API and a generic implementation of it. Later commits will migrate the existing architecture-optimized implementations of POLYVAL into lib/crypto/ and add a KUnit test suite. I've also rewritten the generic implementation completely, using a more modern approach instead of the traditional table-based approach. It's now constant-time, requires no precomputation or dynamic memory allocations, decreases the per-key memory usage from 4096 bytes to 16 bytes, and is faster than the old polyval-generic even on bulk data reusing the same key (at least on x86_64, where I measured 15% faster). We should do this for GHASH too, but for now just do it for POLYVAL. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251109234726.638437-3-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-11 | lib/test_fprobe: add testcase for mixed fprobe | Menglong Dong
Add a testcase for fprobe which hooks the same target with two fprobes: one entry-only and one entry+exit. The two fprobes are registered in different orders. fgraph and ftrace are both used for fprobe, and this testcase covers the mixed situation. Link: https://lore.kernel.org/all/20251015083238.2374294-3-dongml2@chinatelecom.cn/ Signed-off-by: Menglong Dong <dongml2@chinatelecom.cn> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
2025-11-09 | maple_tree: fix tracepoint string pointers | Martin Kaiser
maple_tree tracepoints contain pointers to function names. Such a pointer is saved when a tracepoint logs an event. There's no guarantee that it's still valid when the event is parsed later and the pointer is dereferenced. The kernel warns about these unsafe pointers. event 'ma_read' has unsafe pointer field 'fn' WARNING: kernel/trace/trace.c:3779 at ignore_event+0x1da/0x1e4 Mark the function names as tracepoint_string() to fix the events. One case that doesn't work without my patch would be trace-cmd record to save the binary ringbuffer and trace-cmd report to parse it in userspace. The address of __func__ can't be dereferenced from userspace but tracepoint_string will add an entry to /sys/kernel/tracing/printk_formats Link: https://lkml.kernel.org/r/20251030155537.87972-1-martin@kaiser.cx Fixes: 54a611b60590 ("Maple Tree: add new data structure") Signed-off-by: Martin Kaiser <martin@kaiser.cx> Acked-by: Liam R. Howlett <Liam.Howlett@oracle.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-06 | Merge tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux | Linus Torvalds
Pull crypto library fixes from Eric Biggers: "Two Curve25519 related fixes: - Re-enable KASAN support on curve25519-hacl64.c with gcc. - Disable the arm optimized Curve25519 code on CPU_BIG_ENDIAN kernels. It has always been broken in that configuration" * tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux: lib/crypto: arm/curve25519: Disable on CPU_BIG_ENDIAN lib/crypto: curve25519-hacl64: Fix older clang KASAN workaround for GCC
2025-11-06 | bitops: Update kernel-doc in hweight.c to fix the issues with it | Andy Shevchenko
The kernel-doc in lib/hweight.c is global to the file and currently has issues: Warning: lib/hweight.c:13 expecting prototype for hweightN(). Prototype was for __sw_hweight32() instead Warning: lib/hweight.c:13 function parameter 'w' not described in '__sw_hweight32' Update it accordingly. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Yury Norov (NVIDIA) <yury.norov@gmail.com>
2025-11-05 | lib/crypto: x86/blake2s: Use vpternlogd for 3-input XORs | Eric Biggers
AVX-512 supports 3-input XORs via the vpternlogd (or vpternlogq) instruction with immediate 0x96. This approach, vs. the alternative of two vpxor instructions, is already used in the CRC, AES-GCM, and AES-XTS code, since it reduces the instruction count and is faster on some CPUs. Make blake2s_compress_avx512() take advantage of it too. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251102234209.62133-7-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
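For reference, the immediate byte of vpternlogd is the truth table of the desired three-input Boolean function, indexed by the three source bits. A small standalone C program (an illustrative sketch, not kernel code) that derives the 0x96 value for a ^ b ^ c:

```c
#include <stdio.h>

/* Build the vpternlog immediate for f(a, b, c) = a ^ b ^ c. */
int main(void)
{
	unsigned int imm = 0;

	for (unsigned int a = 0; a <= 1; a++)
		for (unsigned int b = 0; b <= 1; b++)
			for (unsigned int c = 0; c <= 1; c++) {
				unsigned int bit = (a << 2) | (b << 1) | c;

				if (a ^ b ^ c)
					imm |= 1u << bit;
			}

	printf("imm = 0x%02x\n", imm);	/* prints imm = 0x96 */
	return 0;
}
```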
2025-11-05 | lib/crypto: x86/blake2s: Avoid writing back unchanged 'f' value | Eric Biggers
Just before returning, blake2s_compress_ssse3() and blake2s_compress_avx512() store updated values to the 'h', 't', and 'f' fields of struct blake2s_ctx. But 'f' is always unchanged (which is correct; only the C code changes it). So, there's no need to write to 'f'. Use 64-bit stores (movq and vmovq) instead of 128-bit stores (movdqu and vmovdqu) so that only 't' is written. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251102234209.62133-6-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-05 | lib/crypto: x86/blake2s: Improve readability | Eric Biggers
Various cleanups for readability. No change to the generated code: - Add some comments - Add #defines for arguments - Rename some labels - Use decimal constants instead of hex where it makes sense. (The pshufd immediates intentionally remain as hex.) - Add blank lines when there's a logical break The round loop still could use some work, but this is at least a start. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251102234209.62133-5-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-05 | lib/crypto: x86/blake2s: Use local labels for data | Eric Biggers
Following the usual practice, prefix the names of the data labels with ".L" so that the assembler treats them as truly local. This more clearly expresses the intent and is less error-prone. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251102234209.62133-4-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-05 | lib/crypto: x86/blake2s: Drop check for nblocks == 0 | Eric Biggers
Since blake2s_compress() is always passed nblocks != 0, remove the unnecessary check for nblocks == 0 from blake2s_compress_ssse3(). Note that this makes it consistent with blake2s_compress_avx512() in the same file as well as the arm32 blake2s_compress(). Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251102234209.62133-3-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-05 | lib/crypto: x86/blake2s: Fix 32-bit arg treated as 64-bit | Eric Biggers
In the C code, the 'inc' argument to the assembly functions blake2s_compress_ssse3() and blake2s_compress_avx512() is declared with type u32, matching blake2s_compress(). The assembly code then reads it from the 64-bit %rcx. However, the ABI doesn't guarantee zero-extension to 64 bits, nor do gcc or clang guarantee it. Therefore, fix these functions to read this argument from the 32-bit %ecx. In theory, this bug could have caused the wrong 'inc' value to be used, causing incorrect BLAKE2s hashes. In practice, probably not: I've fixed essentially this same bug in many other assembly files too, but there's never been a real report of it having caused a problem. In x86_64, all writes to 32-bit registers are zero-extended to 64 bits. That results in zero-extension in nearly all situations. I've only been able to demonstrate a lack of zero-extension with a somewhat contrived example involving truncation, e.g. when the C code has a u64 variable holding 0x1234567800000040 and passes it as a u32 expecting it to be truncated to 0x40 (64). But that's not what the real code does, of course. Fixes: ed0356eda153 ("crypto: blake2s - x86_64 SIMD implementation") Cc: stable@vger.kernel.org Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251102234209.62133-2-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
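A standalone, illustrative C sketch of the truncation example given above (a u64 holding 0x1234567800000040 passed as a u32 and truncated to 64); it shows the value a correct callee must see in the low 32 bits, not the missing zero-extension itself:

```c
#include <stdint.h>
#include <stdio.h>

/* Callee declares a 32-bit parameter, like the 'inc' argument. */
static uint32_t take_u32(uint32_t inc)
{
	/*
	 * A correct callee reads only the low 32 bits (%ecx on x86_64).
	 * Assembly that reads the full 64-bit register instead may see
	 * whatever the caller left in the upper half.
	 */
	return inc;
}

int main(void)
{
	uint64_t val = 0x1234567800000040ULL;

	/* The ABI does not promise the upper register half is zeroed. */
	printf("low 32 bits: %u\n", take_u32((uint32_t)val)); /* 64 */
	return 0;
}
```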
2025-11-05 | lib/crypto: arm, arm64: Drop filenames from file comments | Eric Biggers
Remove self-references to filenames from assembly files in lib/crypto/arm/ and lib/crypto/arm64/. This follows the recommended practice and eliminates an outdated reference to sha2-ce-core.S. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251102014809.170713-1-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-05 | lib/crypto: arm/blake2s: Fix some comments | Eric Biggers
Fix the indices in some comments in blake2s-core.S. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251102021553.176587-1-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-05 | lib/crypto: s390/sha3: Add optimized one-shot SHA-3 digest functions | Eric Biggers
Some z/Architecture processors can compute a SHA-3 digest in a single instruction. arch/s390/crypto/ already uses this capability to optimize the SHA-3 crypto_shash algorithms. Use this capability to implement the sha3_224(), sha3_256(), sha3_384(), and sha3_512() library functions too. SHA3-256 benchmark results provided by Harald Freudenberger (https://lore.kernel.org/r/4188d18bfcc8a64941c5ebd8de10ede2@linux.ibm.com/) on a z/Architecture machine with "facility 86" (MSA level 12):

Length (bytes)   Before (MB/s)   After (MB/s)
==============   =============   ============
            16             212            225
            64             820            915
           256            1850           3350
          1024            5400           8300
          4096           11200          11300

Note: the original data from Harald was given in the form of a graph for each length, showing the distribution of throughputs from 500 runs. I guesstimated the peak of each one. Harald also reported that the generic SHA-3 code was at most 259 MB/s (https://lore.kernel.org/r/c39f6b6c110def0095e5da5becc12085@linux.ibm.com/). So as expected, the earlier commit that optimized sha3_absorb_blocks() and sha3_keccakf() is the more important one; it optimized the Keccak permutation which is the most performance-critical part of SHA-3. Still, this additional commit does notably improve performance further on some lengths. Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Harald Freudenberger <freude@linux.ibm.com> Link: https://lore.kernel.org/r/20251026055032.1413733-13-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
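A minimal usage sketch of one of the one-shot library functions named above; the exact prototype is an assumption modeled on the other lib/crypto one-shot hash helpers:

```c
#include <crypto/sha3.h>
#include <linux/types.h>

/* Sketch: one-shot SHA3-256 of a buffer (prototype assumed, see above). */
static void digest_buffer(const u8 *data, size_t len)
{
	u8 digest[SHA3_256_DIGEST_SIZE];

	sha3_256(data, len, digest);
	/* ... use digest ... */
}
```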