path: root/lib
Age | Commit message | Author
2025-11-23 | lib/crypto: chacha20poly1305: Statically check fixed array lengths | Jason A. Donenfeld
Several parameters of the chacha20poly1305 functions require arrays of an exact length. Use the new at_least keyword to instruct gcc and clang to statically check that the caller is passing an object of at least that length. Here it is in action, with this faulty patch to wireguard's cookie.h:

    struct cookie_checker {
            u8 secret[NOISE_HASH_LEN];
    -       u8 cookie_encryption_key[NOISE_SYMMETRIC_KEY_LEN];
    +       u8 cookie_encryption_key[NOISE_SYMMETRIC_KEY_LEN - 1];
            u8 message_mac1_key[NOISE_SYMMETRIC_KEY_LEN];

If I try compiling this code, I get this helpful warning:

      CC      drivers/net/wireguard/cookie.o
    drivers/net/wireguard/cookie.c: In function ‘wg_cookie_message_create’:
    drivers/net/wireguard/cookie.c:193:9: warning: ‘xchacha20poly1305_encrypt’ reading 32 bytes from a region of size 31 [-Wstringop-overread]
      193 |         xchacha20poly1305_encrypt(dst->encrypted_cookie, cookie, COOKIE_LEN,
          |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      194 |                                   macs->mac1, COOKIE_LEN, dst->nonce,
          |                                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      195 |                                   checker->cookie_encryption_key);
          |                                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    drivers/net/wireguard/cookie.c:193:9: note: referencing argument 7 of type ‘const u8 *’ {aka ‘const unsigned char *’}
    In file included from drivers/net/wireguard/messages.h:10,
                     from drivers/net/wireguard/cookie.h:9,
                     from drivers/net/wireguard/cookie.c:6:
    include/crypto/chacha20poly1305.h:28:6: note: in a call to function ‘xchacha20poly1305_encrypt’
       28 | void xchacha20poly1305_encrypt(u8 *dst, const u8 *src, const size_t src_len,

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: "Jason A. Donenfeld" <Jason@zx2c4.com>
Link: https://lore.kernel.org/r/20251123054819.2371989-4-Jason@zx2c4.com
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
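For readers unfamiliar with the mechanism: standard C's "[static N]" array-parameter notation gives compilers this same class of length information. How the kernel's at_least annotation is spelled and implemented is not shown in this log, so the stand-alone sketch below sticks to plain C; all names in it are illustrative.

    /* Sketch only, not the kernel's chacha20poly1305.h. */
    #include <stddef.h>

    typedef unsigned char u8;

    #define KEY_SIZE 32 /* stand-in for a fixed key length */

    /* The caller must pass an array of at least KEY_SIZE bytes. */
    void encrypt_demo(u8 *dst, const u8 *src, size_t src_len,
                      const u8 key[static KEY_SIZE])
    {
        /* body elided; only the length annotation in the prototype matters */
        (void)dst; (void)src; (void)src_len; (void)key;
    }

    void demo_caller(void)
    {
        u8 dst[16], src[16];
        u8 short_key[KEY_SIZE - 1];

        /* Compilers can warn here: a 31-byte object where 32 bytes are required. */
        encrypt_demo(dst, src, sizeof(src), short_key);
    }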
2025-11-22 | Merge tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux | Linus Torvalds
Pull crypto library fix from Eric Biggers:

 "Fix another KMSAN warning that made it in while KMSAN wasn't working reliably"

* tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux:
  lib/crypto: tests: Fix KMSAN warning in test_sha256_finup_2x()
2025-11-21 | lib: Support ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION | Yicong Yang
ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION provides the mechanism for invalidating certain memory regions in a cache-incoherent manner. Currently this is used by NVDIMM and CXL memory drivers in cases where it is necessary to flush all data from caches by physical address range. The operations in question are effectively memory hotplug, where stale data might otherwise remain in the caches.

This is separate from the invalidates done to enable use of non-coherent DMA masters, primarily in terms of when it is needed (not related to DMA mappings) and how deep the flush must push data. The flushes done for non-coherent DMA only need to reach the Point of Coherence of a single host (which is often nearer CPUs and DMA masters than the physical storage). This operation must push the data out of non-architectural caches (memory-side caches, write buffers etc) and typically all the way to the memory device.

In some architectures these operations are supported by system components that may become available only later in boot as they are either present on a discoverable bus, or via a firmware description of an MMIO interface (e.g. ACPI DSDT). Provide a framework to handle this case. Architectures can opt in to this support via CONFIG_GENERIC_CPU_CACHE_MAINTENANCE.

Add a registration framework. Each driver provides an ops structure and the first op is Write Back and Invalidate by PA Range. The driver may over-invalidate. For systems that can perform this operation asynchronously an optional completion check operation is also provided. If present, that must be called to ensure that the action has finished. This provides a considerable performance advantage if multiple agents are involved in the maintenance operation.

When multiple agents are present in the system each should register with this framework and the core code will issue the invalidate to all of them before checking for completion on each. This is done to avoid the need for filtering in the core code, which can become complex when interleaving, potentially across different cache coherency hardware, is going on; it is easier to tell everyone and let those who don't care do nothing.

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Co-developed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Signed-off-by: Conor Dooley <conor.dooley@microchip.com>
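The changelog names the operations but not the C identifiers, so the following is only a hypothetical sketch of the registration shape; every name below is made up for illustration.

    #include <linux/device.h>
    #include <linux/types.h>

    struct cache_inval_ops {
        /* Write back and invalidate by PA range; may over-invalidate. */
        int (*wbinv_range)(struct device *dev, phys_addr_t addr, size_t size);
        /* Optional completion check for asynchronous implementations. */
        int (*wbinv_done)(struct device *dev);
    };

    /*
     * Each agent registers; the core issues the invalidate to all registered
     * agents, then polls each ->wbinv_done() before declaring completion.
     */
    int cache_inval_register(struct device *dev, const struct cache_inval_ops *ops);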
2025-11-21 | lib/crypto: tests: Fix KMSAN warning in test_sha256_finup_2x() | Eric Biggers
Fully initialize *ctx, including the buf field which sha256_init() doesn't initialize, to avoid a KMSAN warning when comparing *ctx to orig_ctx. This KMSAN warning slipped in while KMSAN was not working reliably due to a stackdepot bug, which has now been fixed. Fixes: 6733968be7cb ("lib/crypto: tests: Add tests and benchmark for sha256_finup_2x()") Acked-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20251121033431.34406-1-ebiggers@kernel.org Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-11-21 | bug: Add report_bug_entry() | Peter Zijlstra
Add a report_bug() variant where the bug_entry is already known. This is useful when the exception instruction is not instantiated per-site but instead has a single instance. In such a case the bug_entry address might be passed along in a known register or something. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20251110115757.575795595@infradead.org
2025-11-21 | bug: Add BUG_FORMAT_ARGS infrastructure | Peter Zijlstra
Add BUG_FORMAT_ARGS; when an architecture is able to provide a va_list given pt_regs, use this to print format arguments. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20251110115757.457339417@infradead.org
2025-11-21 | bug: Add BUG_FORMAT infrastructure | Peter Zijlstra
Add BUG_FORMAT; an architecture opt-in feature that allows adding the WARN_printf() format string to the bug_entry table. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20251110115757.223371452@infradead.org
2025-11-21 | Merge branch 'objtool/core' | Peter Zijlstra
Bring in the UDB and objtool data annotations to avoid conflicts while further extending the bug exceptions. Signed-off-by: Peter Zijlstra <peterz@infradead.org>
2025-11-20 | lib: ratelimit: fix spelling mistake 'seperately' | Chia-Liang Wang
Corrects a spelling mistake in a comment in ratelimit.c where 'seperately' was used instead of 'separately'. Link: https://lkml.kernel.org/r/20251119101144.3175-1-a0979625527@icloud.com Signed-off-by: Chia-Liang Wang <a0979652527@icloud.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | lib: add KUnit tests for base64 encoding/decoding | Guan-Chun Wu
Add a KUnit test suite to validate the base64 helpers. The tests cover both encoding and decoding, including padded and unpadded forms as defined by RFC 4648 (standard base64), and add negative cases for malformed inputs and padding errors. The test suite also validates other variants (URLSAFE, IMAP) to ensure their correctness.

In addition to functional checks, the suite includes simple microbenchmarks which report average encode/decode latency for small (64B) and larger (1KB) inputs. These numbers are informational only and do not gate the tests.

Kconfig (BASE64_KUNIT) and lib/tests/Makefile are updated accordingly.

Sample KUnit output:

    KTAP version 1
    # Subtest: base64
    # module: base64_kunit
    1..4
    # base64_performance_tests: [64B] encode run : 32ns
    # base64_performance_tests: [64B] decode run : 35ns
    # base64_performance_tests: [1KB] encode run : 510ns
    # base64_performance_tests: [1KB] decode run : 530ns
    ok 1 base64_performance_tests
    ok 2 base64_std_encode_tests
    ok 3 base64_std_decode_tests
    ok 4 base64_variant_tests
    # base64: pass:4 fail:0 skip:0 total:4
    # Totals: pass:4 fail:0 skip:0 total:4

Link: https://lkml.kernel.org/r/20251114060157.89507-1-409411716@gms.tku.edu.tw
Signed-off-by: Guan-Chun Wu <409411716@gms.tku.edu.tw>
Reviewed-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: David Laight <david.laight.linux@gmail.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Ilya Dryomov <idryomov@gmail.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: "Theodore Y. Ts'o" <tytso@mit.edu>
Cc: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Cc: Xiubo Li <xiubli@redhat.com>
Cc: Yu-Sheng Huang <home7438072@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | lib/base64: rework encode/decode for speed and stricter validation | Guan-Chun Wu
The old base64 implementation relied on a bit-accumulator loop, which was slow for larger inputs and too permissive in validation. It would accept extra '=', missing '=', or even '=' appearing in the middle of the input, allowing malformed strings to pass. This patch reworks the internals to improve performance and enforce stricter validation.

Changes:

- Encoder:
  * Process input in 3-byte blocks, mapping 24 bits into four 6-bit symbols, avoiding bit-by-bit shifting and reducing loop iterations.
  * Handle the final 1-2 leftover bytes explicitly and emit '=' only when requested.

- Decoder:
  * Based on the reverse lookup tables from the previous patch, decode input in 4-character groups.
  * Each group is looked up directly, converted into numeric values, and combined into 3 output bytes.
  * Explicitly handle padded and unpadded forms:
    - With padding: input length must be a multiple of 4, and '=' is allowed only in the last two positions. Reject stray or early '='.
    - Without padding: validate tail lengths (2 or 3 chars) and require unused low bits to be zero.
  * Removed the bit-accumulator style loop to reduce loop iterations.

Performance (x86_64, Intel Core i7-10700 @ 2.90GHz, avg over 1000 runs, KUnit):

    Encode:  64B   ~90ns -> ~32ns     (~2.8x)
             1KB   ~1332ns -> ~510ns  (~2.6x)
    Decode:  64B   ~1530ns -> ~35ns   (~43.7x)
             1KB   ~27726ns -> ~530ns (~52.3x)

[akpm@linux-foundation.org: remove u32 casts, per David and Guan-Chun]
Link: https://lkml.kernel.org/r/20251114060132.89279-1-409411716@gms.tku.edu.tw
Co-developed-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Co-developed-by: Yu-Sheng Huang <home7438072@gmail.com>
Signed-off-by: Yu-Sheng Huang <home7438072@gmail.com>
Signed-off-by: Guan-Chun Wu <409411716@gms.tku.edu.tw>
Reviewed-by: David Laight <david.laight.linux@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Ilya Dryomov <idryomov@gmail.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: "Theodore Y. Ts'o" <tytso@mit.edu>
Cc: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Cc: Xiubo Li <xiubli@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
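A minimal, self-contained sketch of the 3-byte-block encoding step described above (illustrative code, not the kernel's lib/base64.c):

    #include <linux/types.h>

    static const char base64_table[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    /*
     * Encode full 3-byte blocks only; the 1-2 leftover bytes and optional '='
     * padding are handled separately, as the changelog describes.
     */
    static size_t b64_encode_blocks(const unsigned char *src, size_t len, char *dst)
    {
        size_t i, out = 0;

        for (i = 0; i + 3 <= len; i += 3) {
            unsigned int v = (src[i] << 16) | (src[i + 1] << 8) | src[i + 2];

            dst[out++] = base64_table[(v >> 18) & 0x3f];
            dst[out++] = base64_table[(v >> 12) & 0x3f];
            dst[out++] = base64_table[(v >> 6) & 0x3f];
            dst[out++] = base64_table[v & 0x3f];
        }
        return out;
    }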
2025-11-20 | lib/base64: optimize base64_decode() with reverse lookup tables | Kuan-Wei Chiu
Replace the use of strchr() in base64_decode() with precomputed reverse lookup tables for each variant. This avoids repeated string scans and improves performance. Use -1 in the tables to mark invalid characters. Decode: 64B ~1530ns -> ~80ns (~19.1x) 1KB ~27726ns -> ~1239ns (~22.4x) [akpm@linux-foundation.org: fix kernedoc] Link: https://lkml.kernel.org/r/20251114060107.89026-1-409411716@gms.tku.edu.tw Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com> Co-developed-by: Guan-Chun Wu <409411716@gms.tku.edu.tw> Signed-off-by: Guan-Chun Wu <409411716@gms.tku.edu.tw> Reviewed-by: David Laight <david.laight.linux@gmail.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Eric Biggers <ebiggers@kernel.org> Cc: Ilya Dryomov <idryomov@gmail.com> Cc: Jaegeuk Kim <jaegeuk@kernel.org> Cc: Jens Axboe <axboe@kernel.dk> Cc: Keith Busch <kbusch@kernel.org> Cc: Sagi Grimberg <sagi@grimberg.me> Cc: "Theodore Y. Ts'o" <tytso@mit.edu> Cc: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com> Cc: Xiubo Li <xiubli@redhat.com> Cc: Yu-Sheng Huang <home7438072@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
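As an illustration of the reverse-lookup idea (a sketch, not the kernel's actual tables): one 256-entry table per variant maps an input byte to its 6-bit value, with -1 marking invalid characters, so no strchr() scan is needed per character.

    #include <linux/string.h>
    #include <linux/types.h>

    static s8 base64_rev[256];

    /* Build the reverse table once from a 64-character alphabet. */
    static void base64_build_rev(const char *alphabet)
    {
        int i;

        memset(base64_rev, -1, sizeof(base64_rev)); /* -1 = invalid character */
        for (i = 0; i < 64; i++)
            base64_rev[(unsigned char)alphabet[i]] = i;
    }

    /* Decoding then becomes: v = base64_rev[(unsigned char)c]; v < 0 means reject. */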
2025-11-20 | lib/base64: add support for multiple variants | Kuan-Wei Chiu
Patch series " lib/base64: add generic encoder/decoder, migrate users", v5. This series introduces a generic Base64 encoder/decoder to the kernel library, eliminating duplicated implementations and delivering significant performance improvements. The Base64 API has been extended to support multiple variants (Standard, URL-safe, and IMAP) as defined in RFC 4648 and RFC 3501. The API now takes a variant parameter and an option to control padding. As part of this series, users are migrated to the new interface while preserving their specific formats: fscrypt now uses BASE64_URLSAFE, Ceph uses BASE64_IMAP, and NVMe is updated to BASE64_STD. On the encoder side, the implementation processes input in 3-byte blocks, mapping 24 bits directly to 4 output symbols. This avoids bit-by-bit streaming and reduces loop overhead, achieving about a 2.7x speedup compared to previous implementations. On the decoder side, replace strchr() lookups with per-variant reverse tables and process input in 4-character groups. Each group is mapped to numeric values and combined into 3 bytes. Padded and unpadded forms are validated explicitly, rejecting invalid '=' usage and enforcing tail rules. This improves throughput by ~43-52x. This patch (of 6): Extend the base64 API to support multiple variants (standard, URL-safe, and IMAP) as defined in RFC 4648 and RFC 3501. The API now takes a variant parameter and an option to control padding. Update NVMe auth code to use the new interface with BASE64_STD. Link: https://lkml.kernel.org/r/20251114055829.87814-1-409411716@gms.tku.edu.tw Link: https://lkml.kernel.org/r/20251114060045.88792-1-409411716@gms.tku.edu.tw Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com> Co-developed-by: Guan-Chun Wu <409411716@gms.tku.edu.tw> Signed-off-by: Guan-Chun Wu <409411716@gms.tku.edu.tw> Reviewed-by: David Laight <david.laight.linux@gmail.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Eric Biggers <ebiggers@kernel.org> Cc: Ilya Dryomov <idryomov@gmail.com> Cc: Jaegeuk Kim <jaegeuk@kernel.org> Cc: Jens Axboe <axboe@kernel.dk> Cc: Keith Busch <kbusch@kernel.org> Cc: Sagi Grimberg <sagi@grimberg.me> Cc: "Theodore Y. Ts'o" <tytso@mit.edu> Cc: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com> Cc: Xiubo Li <xiubli@redhat.com> Cc: Yu-Sheng Huang <home7438072@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | sys_info: add a default kernel sys_info mask | Feng Tang
This serves as a global default sys_info mask. When users want the same system information for many error cases (panic, hung, lockup ...), they can choose to set this global knob only once instead of setting up each individual sys_info knob. This just adds a 'lazy' option, and doesn't change existing kernel behavior as the mask is 0 by default. Link: https://lkml.kernel.org/r/20251113111039.22701-5-feng.tang@linux.alibaba.com Suggested-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Feng Tang <feng.tang@linux.alibaba.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Lance Yang <ioworker0@gmail.com> Cc: "Paul E . McKenney" <paulmck@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | Revert "lib/plist.c: enforce memory ordering in plist_check_list" | Kuan-Wei Chiu
This reverts commit 7abcb84f953df037d40fad66f2109db318dd155b. The introduction of WRITE_ONCE() calls for the 'prev' and 'next' variables inside plist_check_list() was a misapplication. WRITE_ONCE() is fundamentally a compiler barrier designed to prevent compiler optimizations (like caching or reordering) on shared memory locations. However, the variables 'prev' and 'next' are local, stack-allocated pointers accessed only by the current thread's invocation of the function. Since these pointers are thread-local and are never accessed concurrently, applying WRITE_ONCE() to them is semantically incorrect and unnecessary. Furthermore, the use of WRITE_ONCE() on local variables prevents the compiler from performing standard optimizations, such as keeping these variables cached solely in CPU registers throughout the loop, potentially introducing performance overhead. Restore the conventional C assignment for local loop variables, allowing the compiler to generate optimal code. Link: https://lkml.kernel.org/r/20251113193413.499309-1-visitorckw@gmail.com Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com> Cc: I Hsin Cheng <richard120310@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | lib: test_mul_u64_u64_div_u64(): test the 32bit code on 64bit | David Laight
There are slight differences in the mul_u64_add_u64_div_u64() code between 32bit and 64bit systems. Compile and test the 32bit version on 64bit hosts for better test coverage. Link: https://lkml.kernel.org/r/20251105201035.64043-10-david.laight.linux@gmail.com Signed-off-by: David Laight <david.laight.linux@gmail.com> Reviewed-by: Nicolas Pitre <npitre@baylibre.com> Cc: Biju Das <biju.das.jz@bp.renesas.com> Cc: Borislav Betkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Li RongQing <lirongqing@baidu.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | lib: mul_u64_u64_div_u64(): optimise the divide code | David Laight
Replace the bit by bit algorithm with one that generates 16 bits per iteration on 32bit architectures and 32 bits on 64bit ones. On my zen 5 this reduces the time for the tests (using the generic code) from ~3350ns to ~1000ns. Running the 32bit algorithm on 64bit x86 takes ~1500ns. It'll be slightly slower on a real 32bit system, mostly due to register pressure. The savings for 32bit x86 are much higher (tested in userspace). The worst case (lots of bits in the quotient) drops from ~900 clocks to ~130 (pretty much independent of the arguments). Other 32bit architectures may see better savings.

It is possible to optimise for divisors that span less than __LONG_WIDTH__/2 bits. However I suspect they don't happen that often and it doesn't remove any slow cpu divide instructions which dominate the result.

Typical improvements for 64bit random divides:

                   old   new
    sandy bridge:  470   150
    haswell:       400   144
    piledriver:    960   467   (I think rdpmc is very slow.)
    zen5:          244    80

(Timing is 'rdpmc; mul_div(); rdpmc' with the multiply depending on the first rdpmc and the second rdpmc depending on the quotient.) Object code (64bit x86 test program): old 0x173 new 0x141.

Link: https://lkml.kernel.org/r/20251105201035.64043-9-david.laight.linux@gmail.com Signed-off-by: David Laight <david.laight.linux@gmail.com> Reviewed-by: Nicolas Pitre <npitre@baylibre.com> Cc: Biju Das <biju.das.jz@bp.renesas.com> Cc: Borislav Betkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Li RongQing <lirongqing@baidu.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | lib: mul_u64_u64_div_u64(): optimise multiply on 32bit x86 | David Laight
gcc generates horrid code for both ((u64)u32_a * u32_b) and (u64_a + u32_b). As well as the extra instructions it can generate a lot of spills to stack (including spills of constant zeros and even multiplies by constant zero). mul_u32_u32() already exists to optimise the multiply. Add a similar add_u64_u32() for the addition. Disable both for clang - it generates better code without them. Move the 64x64 => 128 multiply into a static inline helper function for code clarity. No need for the a/b_hi/lo variables, the implicit casts on the function calls do the work for us. Should have minimal effect on the generated code. Use mul_u32_u32() and add_u64_u32() in the 64x64 => 128 multiply in mul_u64_add_u64_div_u64(). Link: https://lkml.kernel.org/r/20251105201035.64043-8-david.laight.linux@gmail.com Signed-off-by: David Laight <david.laight.linux@gmail.com> Reviewed-by: Nicolas Pitre <npitre@baylibre.com> Cc: Biju Das <biju.das.jz@bp.renesas.com> Cc: Borislav Betkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Li RongQing <lirongqing@baidu.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
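The generic C forms of the two helpers are trivial; the point of the patch is steering 32bit x86 code generation. A sketch of the generic versions is below (the exact kernel definition of add_u64_u32() in particular is an assumption):

    #include <linux/types.h>

    /* 32x32 => 64 multiply without widening both operands to 64 bits first. */
    static inline u64 mul_u32_u32(u32 a, u32 b)
    {
        return (u64)a * b;
    }

    /* 64 + 32 add; the x86-32 variant is expected to use asm to avoid spills. */
    static inline u64 add_u64_u32(u64 a, u32 b)
    {
        return a + b;
    }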
2025-11-20 | lib: test_mul_u64_u64_div_u64(): test both generic and arch versions | David Laight
Change the #if in div64.c so that test_mul_u64_u64_div_u64.c can compile and test the generic version (including the 'long multiply') on architectures (eg amd64) that define their own copy. Test the kernel version and the locally compiled version on all arch. Output the time taken (in ns) on the 'test completed' trace. For reference, on my zen 5, the optimised version takes ~220ns and the generic version ~3350ns. Using the native multiply saves ~200ns and adding back the ilog2() 'optimisation' test adds ~50ns. Link: https://lkml.kernel.org/r/20251105201035.64043-7-david.laight.linux@gmail.com Signed-off-by: David Laight <david.laight.linux@gmail.com> Reviewed-by: Nicolas Pitre <npitre@baylibre.com> Cc: Biju Das <biju.das.jz@bp.renesas.com> Cc: Borislav Betkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Li RongQing <lirongqing@baidu.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | lib: add tests for mul_u64_u64_div_u64_roundup() | David Laight
Replicate the existing mul_u64_u64_div_u64() test cases with round up. Update the shell script that verifies the table, remove the comment markers so that it can be directly pasted into a shell. Rename the divisor from 'c' to 'd' to match mul_u64_add_u64_div_u64(). If any tests fail then fail the module load with -EINVAL. Link: https://lkml.kernel.org/r/20251105201035.64043-6-david.laight.linux@gmail.com Signed-off-by: David Laight <david.laight.linux@gmail.com> Reviewed-by: Nicolas Pitre <npitre@baylibre.com> Cc: Biju Das <biju.das.jz@bp.renesas.com> Cc: Borislav Betkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Li RongQing <lirongqing@baidu.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | lib: add mul_u64_add_u64_div_u64() and mul_u64_u64_div_u64_roundup() | David Laight
The existing mul_u64_u64_div_u64() rounds down; a 'rounding up' variant needs 'divisor - 1' added in between the multiply and divide, so it cannot easily be done by a caller. Add mul_u64_add_u64_div_u64(a, b, c, d) that calculates (a * b + c)/d and implement the 'round down' and 'round up' using it. Update the x86-64 asm to optimise for 'c' being a constant zero. Add kernel-doc definitions for all three functions. Link: https://lkml.kernel.org/r/20251105201035.64043-5-david.laight.linux@gmail.com Signed-off-by: David Laight <david.laight.linux@gmail.com> Reviewed-by: Nicolas Pitre <npitre@baylibre.com> Cc: Biju Das <biju.das.jz@bp.renesas.com> Cc: Borislav Betkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Li RongQing <lirongqing@baidu.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
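Per the cover letter further down this log, the two user-visible helpers reduce to the new primitive as follows (shown as static inlines here; the kernel may express them differently):

    #include <linux/types.h>

    u64 mul_u64_add_u64_div_u64(u64 a, u64 b, u64 c, u64 d); /* (a * b + c) / d */

    static inline u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 d)
    {
        return mul_u64_add_u64_div_u64(a, b, 0, d);     /* round down */
    }

    static inline u64 mul_u64_u64_div_u64_roundup(u64 a, u64 b, u64 d)
    {
        return mul_u64_add_u64_div_u64(a, b, d - 1, d); /* round up */
    }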
2025-11-20 | lib: mul_u64_u64_div_u64(): simplify check for a 64bit product | David Laight
If the product is only 64bits, div64_u64() can be used for the divide. Replace the pre-multiply check (ilog2(a) + ilog2(b) <= 62) with a simple post-multiply check that the high 64bits are zero. This has the advantage of being simpler, more accurate and less code. It will always be faster when the product is larger than 64bits. Most 64bit cpus have a native 64x64=128 bit multiply; this is needed (for the low 64bits) even when div64_u64() is called - so the early check gains nothing and is just extra code. 32bit cpus will need a compare (etc) to generate the 64bit ilog2() from two 32bit bit scans - so that is non-trivial. (Never mind the mess of x86's 'bsr' and any oddball cpu without fast bit-scan instructions.) Whereas the additional instructions for the 128bit multiply result are pretty much one multiply and two adds (typically the 'adc $0,%reg' can be run in parallel with the instruction that follows). The only outliers are 64bit systems without 128bit multiply and simple in-order 32bit ones with fast bit scan but needing extra instructions to get the high bits of the multiply result. I doubt it makes much difference to either, the latter is definitely not mainstream. If anyone is worried about the analysis they can look at the generated code for x86 (especially when cmov isn't used). Link: https://lkml.kernel.org/r/20251105201035.64043-4-david.laight.linux@gmail.com Signed-off-by: David Laight <david.laight.linux@gmail.com> Reviewed-by: Nicolas Pitre <npitre@baylibre.com> Cc: Biju Das <biju.das.jz@bp.renesas.com> Cc: Borislav Betkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Li RongQing <lirongqing@baidu.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
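A sketch of the post-multiply check, using GCC's unsigned __int128 for brevity (the actual kernel code works on explicit high/low halves):

    #include <linux/math64.h>   /* div64_u64() */
    #include <linux/types.h>

    /* Returns true and the low half if a * b fits in 64 bits. */
    static bool product_fits_u64(u64 a, u64 b, u64 *lo)
    {
        unsigned __int128 p = (unsigned __int128)a * b;

        *lo = (u64)p;
        return (u64)(p >> 64) == 0; /* high 64 bits zero => 64-bit divide is enough */
    }

    /* Fast path usage: if (product_fits_u64(a, b, &lo)) return div64_u64(lo, d); */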
2025-11-20 | lib: mul_u64_u64_div_u64(): combine overflow and divide by zero checks | David Laight
Since the overflow check always triggers when the divisor is zero move the check for divide by zero inside the overflow check. This means there is only one test in the normal path. Link: https://lkml.kernel.org/r/20251105201035.64043-3-david.laight.linux@gmail.com Signed-off-by: David Laight <david.laight.linux@gmail.com> Reviewed-by: Nicolas Pitre <npitre@baylibre.com> Cc: Biju Das <biju.das.jz@bp.renesas.com> Cc: Borislav Betkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Li RongQing <lirongqing@baidu.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | lib: mul_u64_u64_div_u64(): rename parameter 'c' to 'd' | David Laight
Patch series "Implement mul_u64_u64_div_u64_roundup()", v5. The pwm-stm32.c code wants a 'rounding up' version of mul_u64_u64_div_u64(). This can be done simply by adding 'divisor - 1' to the 128bit product. Implement mul_u64_add_u64_div_u64(a, b, c, d) = (a * b + c)/d based on the existing code. Define mul_u64_u64_div_u64(a, b, d) as mul_u64_add_u64_div_u64(a, b, 0, d) and mul_u64_u64_div_u64_roundup(a, b, d) as mul_u64_add_u64_div_u64(a, b, d-1, d). Only x86-64 has an optimised (asm) version of the function. That is optimised to avoid the 'add c' when c is known to be zero. In all other cases the extra code will be noise compared to the software divide code. The test module has been updated to test mul_u64_u64_div_u64_roundup() and also enhanced to verify the C division code on x86-64 and the 32bit division code on 64bit. This patch (of 9): Change the prototype from mul_u64_u64_div_u64(u64 a, u64 b, u64 c) to mul_u64_u64_div_u64(u64 a, u64 b, u64 d). Using 'd' for 'divisor' makes more sense. An upcoming change adds a 'c' parameter to calculate (a * b + c)/d. Link: https://lkml.kernel.org/r/20251105201035.64043-1-david.laight.linux@gmail.com Link: https://lkml.kernel.org/r/20251105201035.64043-2-david.laight.linux@gmail.com Signed-off-by: David Laight <david.laight.linux@gmail.com> Reviewed-by: Nicolas Pitre <npitre@baylibre.com> Cc: Biju Das <biju.das.jz@bp.renesas.com> Cc: Borislav Betkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Li RongQing <lirongqing@baidu.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleinxer <tglx@linutronix.de> Cc: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | lib/Kconfig.debug: cleanup CONFIG_DEBUG_SECTION_MISMATCH help text | Borislav Petkov (AMD)
Simplify formulations, correct flow, split it into proper paragraphs and update structure. No functional changes. Link: https://lkml.kernel.org/r/20251029122743.1110-1-bp@kernel.org Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | panic: sys_info: factor out read and write handlers | Andy Shevchenko
For the sake of code readability and easier maintenance, factor out the read and write sys_info handlers. [akpm@linux-foundation.org: coding-style cleanups] Link: https://lkml.kernel.org/r/20251030132007.3742368-7-andriy.shevchenko@linux.intel.com Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Feng Tang <feng.tang@linux.alibaba.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | panic: sys_info: deduplicate local variable 'table' assignments | Andy Shevchenko
Both handlers use the local 'table' variable and assign the same data to it; deduplicate that. Link: https://lkml.kernel.org/r/20251030132007.3742368-6-andriy.shevchenko@linux.intel.com Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Feng Tang <feng.tang@linux.alibaba.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | panic: sys_info: rewrite a fix for a compilation error (`make W=1`) | Andy Shevchenko
The compiler was not happy about a dead variable in use:

    lib/sys_info.c:52:19: error: variable 'sys_info_avail' is not needed and will not be emitted [-Werror,-Wunneeded-internal-declaration]
       52 | static const char sys_info_avail[] = "tasks,mem,timers,locks,ftrace,all_bt,blocked_tasks";
          |                   ^~~~~~~~~~~~~~

This was fixed by adding the __maybe_unused attribute, which just hides the issue and didn't actually fix the root cause. Rewrite the fix by moving the local variable from the stack to the heap. As a side effect this drops unneeded "synchronisation" of duplicative info and also makes the code ready for further refactoring.

Link: https://lkml.kernel.org/r/20251030132007.3742368-5-andriy.shevchenko@linux.intel.com Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Feng Tang <feng.tang@linux.alibaba.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | panic: sys_info: replace struct sys_info_name with a plain array of strings | Andy Shevchenko
There is no need to keep a custom structure just for the need of a plain array of strings. Replace struct sys_info_name with a plain array of strings. With that done, simplify the code; in particular, use for_each_set_bit() naturally when iterating over the si_bits_global bitmap. Link: https://lkml.kernel.org/r/20251030132007.3742368-4-andriy.shevchenko@linux.intel.com Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Cc: Feng Tang <feng.tang@linux.alibaba.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
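A sketch of the pattern this moves to, combining a plain array of strings with for_each_set_bit() (the array contents are taken from the sys_info names quoted above; the local identifiers are illustrative, not the actual lib/sys_info.c symbols):

    #include <linux/bitops.h>
    #include <linux/kernel.h>
    #include <linux/printk.h>

    static const char * const si_names[] = {
        "tasks", "mem", "timers", "locks", "ftrace", "all_bt", "blocked_tasks",
    };

    static void si_print_selected(unsigned long mask)
    {
        unsigned int bit;

        /* Visit only the bits that are set in the captured mask. */
        for_each_set_bit(bit, &mask, ARRAY_SIZE(si_names))
            pr_info("%s\n", si_names[bit]);
    }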
2025-11-20 | panic: sys_info: align constant definition names with parameters | Andy Shevchenko
Align constant definition names with parameters to make it easier to map between them. This also makes the names easier to maintain and extend while keeping them unique. Link: https://lkml.kernel.org/r/20251030132007.3742368-3-andriy.shevchenko@linux.intel.com Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Feng Tang <feng.tang@linux.alibaba.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | panic: sys_info: capture si_bits_global before iterating over it | Andy Shevchenko
Patch series "panic: sys_info: Refactor and fix a potential issue", v3. While targeting the compilation issue due to a dangling variable, I noticed more opportunities for refactoring that help to avoid the above-mentioned compilation issue in a cleaner way and also fix a potential problem with global variable access. This patch (of 6): The for-loop might re-read the content of the memory si_bits_global points to on each iteration. Instead, capture it once for the sake of consistency and use that copy. Link: https://lkml.kernel.org/r/20251030132007.3742368-1-andriy.shevchenko@linux.intel.com Link: https://lkml.kernel.org/r/20251030132007.3742368-2-andriy.shevchenko@linux.intel.com Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Feng Tang <feng.tang@linux.alibaba.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-20 | lib/alloc_tag: use %pe format specifier | Zeng Chi
The %pe format specifier is designed to print error pointers. It prints a symbolic error name (e.g. -EINVAL) and it makes the code simpler by omitting PTR_ERR(). This patch fixes this cocci report: lib/alloc_tag.c:776:63-70: WARNING: Consider using %pe to print PTR_ERR() Link: https://lkml.kernel.org/r/20251105023925.1447482-1-zeng_chi911@163.com Signed-off-by: Zeng Chi <zengchi@kylinos.cn> Acked-by: SeongJae Park <sj@kernel.org> Acked-by: Suren Baghdasaryan <surenb@google.com> Cc: Kent Overstreet <kent.overstreet@linux.dev> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
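A generic before/after illustration of the %pe conversion (not the exact alloc_tag.c call site):

    #include <linux/err.h>
    #include <linux/printk.h>

    static void report(void *ret)
    {
        /* Before: prints a raw negative number, e.g. "-22". */
        pr_warn("operation failed: %ld\n", PTR_ERR(ret));

        /* After: %pe prints the symbolic name, e.g. "-EINVAL". */
        pr_warn("operation failed: %pe\n", ret);
    }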
2025-11-20 | gpio: improve support for shared GPIOs | Mark Brown
Merge series from Bartosz Golaszewski <brgl@bgdev.pl>: Problem statement: GPIOs are implemented as a strictly exclusive resource in the kernel but there are lots of platforms on which a single pin is shared by multiple devices which don't communicate, so they need some way of properly sharing access to a GPIO. What we have now is the GPIOD_FLAGS_BIT_NONEXCLUSIVE flag which was introduced as a hack and doesn't do any locking or arbitration of access - it literally just hands the same GPIO descriptor to all interested users. The proposed solution is composed of three major parts: the high-level, shared GPIO proxy driver that arbitrates access to the shared pin and exposes a regular GPIO chip interface to consumers, a low-level shared GPIOLIB module that scans firmware nodes and creates auxiliary devices that attach to the proxy driver and finally a set of core GPIOLIB changes that plug the former into the GPIO lookup path. The changes are implemented in a way that allows seamlessly compiling out any code related to sharing GPIOs for systems that don't need it. The practical use-case for this is the powerdown GPIOs shared by speakers on the Qualcomm db845c platform; however, I have also extensively tested it using gpio-virtuser on arm64 qemu with various DT configurations.
2025-11-20 | cpumask: Introduce cpumask_weighted_or() | Thomas Gleixner
CID management OR's two cpumasks and then calculates the weight on the result. That's inefficient as that has to walk the same stuff twice. As this is done with runqueue lock held, there is a real benefit of speeding this up. Depending on the system this results in 10-20% less cycles spent with runqueue lock held for a 4K cpumask. Provide cpumask_weighted_or() and the corresponding bitmap functions which return the weight of the OR result right away. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Yury Norov (NVIDIA) <yury.norov@gmail.com> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://patch.msgid.link/20251119172549.448263340@linutronix.de
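The two-pass form being replaced looks like the sketch below; the changelog does not show the new helper's prototype, so its call shape is only indicated in a comment as an assumption.

    #include <linux/cpumask.h>

    /* Existing pattern: walk the masks once to OR them, once more to count. */
    static unsigned int weight_of_or(struct cpumask *dst,
                                     const struct cpumask *a,
                                     const struct cpumask *b)
    {
        cpumask_or(dst, a, b);
        return cpumask_weight(dst);
    }

    /*
     * With the new helper the two walks collapse into one, presumably along
     * the lines of:  w = cpumask_weighted_or(dst, a, b);
     */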
2025-11-19 | raid6: test: Add support for RISC-V | Chunyan Zhang
Add RISC-V code to be compiled to allow the userspace raid6test program to be built and run on RISC-V. Signed-off-by: Chunyan Zhang <zhang.lyra@gmail.com> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://patch.msgid.link/20250718072711.3865118-6-zhangchunyan@iscas.ac.cn Signed-off-by: Paul Walmsley <pjw@kernel.org>
2025-11-19 | raid6: riscv: Allow code to be compiled in userspace | Chunyan Zhang
To support userspace raid6test, this patch adds a __KERNEL__ ifdef around kernel header inclusions as well as userspace wrapper definitions to allow the code to be compiled in userspace. This patch also drops the NSIZE macro in favour of using the vector length, which works for both kernel and user space. Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com> Signed-off-by: Chunyan Zhang <zhangchunyan@iscas.ac.cn> Link: https://patch.msgid.link/20250718072711.3865118-5-zhangchunyan@iscas.ac.cn Signed-off-by: Paul Walmsley <pjw@kernel.org>
2025-11-19 | raid6: riscv: Prevent compiler from breaking inline vector assembly code | Chunyan Zhang
To prevent the compiler from breaking the inline vector assembly code, this code must be built without compiler support for vector. Signed-off-by: Chunyan Zhang <zhangchunyan@iscas.ac.cn> Link: https://patch.msgid.link/20250718072711.3865118-4-zhangchunyan@iscas.ac.cn [pjw@kernel.org: cleaned up commit message] Signed-off-by: Paul Walmsley <pjw@kernel.org>
2025-11-19 | lib/vsprintf: Add specifier for printing struct timespec64 | Andy Shevchenko
A handful of drivers want to print the contents of a struct timespec64 in the format %lld:%09ld. In order to make their lives easier, add the corresponding specifier directly to the printf() implementation. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Tested-by: Petr Mladek <pmladek@suse.com> Link: https://patch.msgid.link/20251113150217.3030010-2-andriy.shevchenko@linux.intel.com Signed-off-by: Petr Mladek <pmladek@suse.com>
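For context, the open-coded pattern those drivers currently use looks like this (the new conversion's exact letter is not named in this log, so it is not shown):

    #include <linux/printk.h>
    #include <linux/time64.h>

    static void print_ts(const struct timespec64 *ts)
    {
        /* Hand-rolled today; the new specifier replaces this format string. */
        pr_info("timestamp: %lld:%09ld\n", (long long)ts->tv_sec, ts->tv_nsec);
    }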
2025-11-19 | lib/vsprintf: Deduplicate special hex number specifier data | Andy Shevchenko
Two functions use almost the same specifier data for the special hex number; the only difference is that the field width is calculated from the size of the given type. Due to that, make a compound literal macro in order to deduplicate the rest. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Tested-by: Petr Mladek <pmladek@suse.com> Link: https://patch.msgid.link/20251113150313.3030700-1-andriy.shevchenko@linux.intel.com Signed-off-by: Petr Mladek <pmladek@suse.com>
2025-11-18 | lib/strn*,uaccess: Use masked_user_{read/write}_access_begin when required | Christophe Leroy
Properly use masked_user_read_access_begin() and masked_user_write_access_begin() instead of masked_user_access_begin() in order to match user_read_access_end() and user_write_access_end(). This is important for architectures like PowerPC that enable user reads and user writes separately. That means masked_user_read_access_begin() is used when user memory is exclusively read during the window and masked_user_write_access_begin() is used when user memory is exclusively written during the window. masked_user_access_begin() remains and is used when both reads and writes are performed during the open window. Each of them is expected to be terminated by the matching user_read_access_end(), user_write_access_end() and user_access_end(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://patch.msgid.link/cb5e4b0fa49ea9c740570949d5e3544423389757.1763396724.git.christophe.leroy@csgroup.eu
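A sketch of the read-side pairing described above, modelled on the existing lib/strncpy_from_user.c pattern rather than copied from this patch:

    #include <linux/uaccess.h>

    static long read_word_from_user(const unsigned long __user *src, unsigned long *val)
    {
        if (can_do_masked_user_access())
            src = masked_user_read_access_begin(src);
        else if (!user_read_access_begin(src, sizeof(*src)))
            return -EFAULT;

        unsafe_get_user(*val, src, Efault);
        user_read_access_end();     /* matches the read-side begin above */
        return 0;

    Efault:
        user_read_access_end();
        return -EFAULT;
    }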
2025-11-18 | iov_iter: Add missing speculation barrier to copy_from_user_iter() | Christophe Leroy
The results of "access_ok()" can be mis-speculated. The result is that the CPU can end up speculatively past the check:

    if (access_ok(from, size))
            // Right here

For the same reason as done in copy_from_user() in commit 74e19ef0ff80 ("uaccess: Add speculation barrier to copy_from_user()"), add a speculation barrier to copy_from_user_iter(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://patch.msgid.link/6b73e69cc7168c89df4eab0a216e3ed4cca36b0a.1763396724.git.christophe.leroy@csgroup.eu
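The fix pattern from the referenced copy_from_user() commit, as it would look on an access_ok()-guarded copy (a sketch, not the actual lib/iov_iter.c hunk):

    #include <linux/nospec.h>
    #include <linux/uaccess.h>

    static size_t guarded_copy(void *to, const void __user *from, size_t size)
    {
        size_t uncopied = size;

        if (access_ok(from, size)) {
            barrier_nospec();   /* don't let the CPU speculate past the check */
            uncopied = raw_copy_from_user(to, from, size);
        }
        return uncopied;    /* bytes NOT copied, per copy_from_user() convention */
    }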
2025-11-18 | iov_iter: Convert copy_from_user_iter() to masked user access | Christophe Leroy
copy_from_user_iter() lacks a speculation barrier; adding one would degrade performance on some architectures like x86, which would be unfortunate as copy_from_user_iter() is a critical hotpath function. Convert copy_from_user_iter() to using masked user access on architectures that support it. This allows adding the speculation barrier without impacting performance. This is similar to what was done for copy_from_user() in commit 0fc810ae3ae1 ("x86/uaccess: Avoid barrier_nospec() in 64-bit copy_from_user()") [ tglx: Massage change log ] Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://patch.msgid.link/58e4b07d469ca68a2b9477fe2c1ccc8a44cef131.1763396724.git.christophe.leroy@csgroup.eu
2025-11-17 | string: provide strends() | Bartosz Golaszewski
Implement a function for checking whether a string ends with another string and add its KUnit test cases. Acked-by: Linus Walleij <linus.walleij@linaro.org> Link: https://lore.kernel.org/r/20251112-gpio-shared-v4-1-b51f97b1abd8@linaro.org Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
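A sketch of what such a helper does (the kernel's actual definition may differ in details):

    #include <linux/string.h>
    #include <linux/types.h>

    static inline bool strends(const char *str, const char *suffix)
    {
        size_t len = strlen(str), suffix_len = strlen(suffix);

        if (suffix_len > len)
            return false;

        return !strcmp(str + len - suffix_len, suffix);
    }

    /* strends("pwrdn-gpios", "-gpios") -> true; strends("gpio", "gpios") -> false */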
2025-11-16 | lib/test_vmalloc: remove xfail condition check | Uladzislau Rezki (Sony)
A test marked with "xfail = true" is expected to fail but that does not mean it is predetermined to fail. Remove "xfail" condition check for tests which pass successfully. Link: https://lkml.kernel.org/r/20251007122035.56347-3-urezki@gmail.com Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Baoquan He <bhe@redhat.com> Cc: Marco Elver <elver@google.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-16 | lib/test_vmalloc: add no_block_alloc_test case | Uladzislau Rezki (Sony)
Patch series "__vmalloc()/kvmalloc() and no-block support", v4. This patch (of 10): Introduce a new test case "no_block_alloc_test" that verifies non-blocking allocations using __vmalloc() with GFP_ATOMIC and GFP_NOWAIT flags. It is recommended to build kernel with CONFIG_DEBUG_ATOMIC_SLEEP enabled to help catch "sleeping while atomic" issues. This test ensures that memory allocation logic under atomic constraints does not inadvertently sleep. Link: https://lkml.kernel.org/r/20251007122035.56347-2-urezki@gmail.com Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com> Cc: Baoquan He <bhe@redhat.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Marco Elver <elver@google.com> Cc: Michal Hocko <mhocko@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-15 | lib/test_kho: check if KHO is enabled | Pasha Tatashin
We must check whether KHO is enabled prior to issuing KHO commands, otherwise KHO internal data structures are not initialized. Link: https://lkml.kernel.org/r/20251106220635.2608494-1-pasha.tatashin@soleen.com Fixes: b753522bed0b ("kho: add test for kexec handover") Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com> Reported-by: kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202511061629.e242724-lkp@intel.com Reviewed-by: Pratyush Yadav <pratyush@kernel.org> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org> Cc: Alexander Graf <graf@amazon.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-11-14 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf after 6.18-rc5+ | Alexei Starovoitov
Cross-merge BPF and other fixes after downstream PR. Minor conflict in kernel/bpf/helpers.c Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2025-11-14 | kunit: Make filter parameters configurable via Kconfig | Thomas Weißschuh
Enable presetting filter parameters from Kconfig options, similar to how other KUnit configuration parameters are already handled. This is useful to run a subset of tests even if the cmdline is not readily modifiable. Link: https://lore.kernel.org/r/20251106-kunit-filter-kconfig-v1-1-d723fb7ac221@linutronix.de Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2025-11-13 | Merge tag 'v6.18-rc5' into objtool/core, to pick up fixes | Ingo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-11-12 | Merge tag 'arm64-fpsimd-on-stack-for-v6.19' into libcrypto-fpsimd-on-stack | Eric Biggers
Pull fpsimd-on-stack changes from Ard Biesheuvel: "Shared tag/branch for arm64 FP/SIMD changes going through libcrypto" Signed-off-by: Eric Biggers <ebiggers@kernel.org>