|
There is an audio channel shift issue in the multi-channel case - the
channel order is correct for the first run, but is shifted for the
second run. The fix is to reset the PAI interface at the end of
playback.
The reset can be handled by PM runtime, so enable PM runtime.
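As a rough illustration of the PM runtime wiring (placeholder names, not
the actual driver symbols - imx_pai, imx_pai_reset() and friends are
assumptions here), the reset can live in the runtime-suspend callback:
static int imx_pai_runtime_suspend(struct device *dev)
{
	struct imx_pai *pai = dev_get_drvdata(dev);	/* hypothetical private data */

	imx_pai_reset(pai);	/* hypothetical helper that resets the PAI block */
	return 0;
}

static const struct dev_pm_ops imx_pai_pm_ops = {
	RUNTIME_PM_OPS(imx_pai_runtime_suspend, NULL, NULL)
};
With devm_pm_runtime_enable() called in probe() and the playback
start/stop paths bracketed by pm_runtime_resume_and_get() /
pm_runtime_put(), the runtime-suspend callback (and thus the reset) runs
once playback ends.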
Fixes: 0205fae6327a ("drm/bridge: imx: add driver for HDMI TX Parallel Audio Interface")
Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
Reviewed-by: Liu Ying <victor.liu@nxp.com>
Signed-off-by: Liu Ying <victor.liu@nxp.com>
Link: https://lore.kernel.org/r/20260130080910.3532724-1-shengjiu.wang@nxp.com
|
|
Add quirk to support microphone input through headphone jack on Acer Nitro 5 AN515-57 (ALC295).
Signed-off-by: Breno Baptista <brenomb07@gmail.com>
Link: https://patch.msgid.link/20260205024341.26694-1-brenomb07@gmail.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
nft_map_catchall_activate() has an inverted element activity check
compared to its non-catchall counterpart nft_mapelem_activate() and
compared to what is logically required.
nft_map_catchall_activate() is called from the abort path to re-activate
catchall map elements that were deactivated during a failed transaction.
It should skip elements that are already active (they don't need
re-activation) and process elements that are inactive (they need to be
restored). Instead, the current code does the opposite: it skips inactive
elements and processes active ones.
Compare the non-catchall activate callback, which is correct:
nft_mapelem_activate():
if (nft_set_elem_active(ext, iter->genmask))
return 0; /* skip active, process inactive */
With the buggy catchall version:
nft_map_catchall_activate():
if (!nft_set_elem_active(ext, genmask))
continue; /* skip inactive, process active */
The consequence is that when a DELSET operation is aborted,
nft_setelem_data_activate() is never called for the catchall element.
For NFT_GOTO verdict elements, this means nft_data_hold() is never
called to restore the chain->use reference count. Each abort cycle
permanently decrements chain->use. Once chain->use reaches zero,
DELCHAIN succeeds and frees the chain while catchall verdict elements
still reference it, resulting in a use-after-free.
This is exploitable for local privilege escalation from an unprivileged
user via user namespaces + nftables on distributions that enable
CONFIG_USER_NS and CONFIG_NF_TABLES.
Fix by removing the negation so the check matches nft_mapelem_activate():
skip active elements, process inactive ones.
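For illustration, the corrected check in the catchall list walk looks
roughly like the sketch below (the surrounding loop is paraphrased from
the snippets above rather than copied from the tree):
ext = nft_set_elem_ext(set, catchall->elem);
if (nft_set_elem_active(ext, genmask))
	continue;	/* already active: nothing to re-activate */
/* inactive: restore data references (e.g. chain->use for NFT_GOTO) */
nft_setelem_data_activate(ctx->net, set, catchall->elem);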
Fixes: 628bd3e49cba ("netfilter: nf_tables: drop map element references from preparation phase")
Signed-off-by: Andrew Fasano <andrew.fasano@nist.gov>
Signed-off-by: Florian Westphal <fw@strlen.de>
|
|
https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless
Johannes Berg says:
====================
Two last-minute iwlwifi fixes:
- cancel mlo_scan_work on disassoc to avoid
use-after-free/init-after-queue issues
- pause TCM work on suspend to avoid crashing
the FW (and sometimes the host) on resume
with traffic
* tag 'wireless-2026-02-04' of https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless:
wifi: iwlwifi: mvm: pause TCM on fast resume
wifi: iwlwifi: mld: cancel mlo_scan_start_wk
====================
Link: https://patch.msgid.link/20260204113547.159742-4-johannes@sipsolutions.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Set UBLK_TEST_DIR to ${TMPDIR:-./ublktest-dir}/${TID}.XXXXXX to create
per-test subdirectories organized by test ID. This makes it easier to
identify and debug specific test runs.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Secure erase should use max_secure_erase_sectors instead of being limited
by max_discard_sectors. Separate the handling of REQ_OP_SECURE_ERASE from
REQ_OP_DISCARD to allow each operation to use its own size limit.
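A minimal sketch of the idea - an illustrative helper picking the limit
per operation, not the actual mmc_blk code:
static unsigned int mmc_erase_limit_sectors(struct request *req)
{
	struct request_queue *q = req->q;

	/* secure erase and discard each honour their own queue limit */
	if (req_op(req) == REQ_OP_SECURE_ERASE)
		return q->limits.max_secure_erase_sectors;
	return q->limits.max_discard_sectors;
}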
Signed-off-by: Luke Wang <ziniu.wang_1@nxp.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Kumar Kartikeya Dwivedi says:
====================
Fix for bpf_wq retry loop during free
Small fix and improvement to ensure cancel_work() can handle the case
where the wq callback is running, and to avoid calling
call_rcu_tasks_trace() repeatedly after a failed cancel_work() when the
wq callback is not pending.
====================
Link: https://patch.msgid.link/20260205003853.527571-1-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Replace prog and callback in bpf_async_cb after removing visibility of
bpf_async_cb in bpf_async_cancel_and_free() to increase the chances that
scheduled async callbacks short-circuit execution and exit early without
starting an RCU tasks trace section. This reduces the overall time spent
running the wq selftest.
Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20260205003853.527571-3-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
When freeing a bpf_async_cb in bpf_async_cb_rcu_tasks_trace_free(), in
case the wq callback is not scheduled, cancel_work() currently returns
false and leads to a retry of the RCU tasks trace grace period. If the
callback is never scheduled, we keep retrying indefinitely and never put
the prog reference.
Since the only race we care about here is against a potentially running
wq callback in the first grace period, which should finish by the second
grace period, check the work_busy() result to detect a running wq
callback when the work is not pending; otherwise free the object
immediately without retrying.
Reasoning behind the check and its correctness with a racing wq callback
invocation: cancel_work() is synchronized, so calling it first and
getting false means the work is definitely not pending. At that point
the work is either not scheduled at all, already running, or it raced
with us and finished by the time we checked with work_busy(). In case it
is running, we synchronize using pool->lock to check the work currently
running there; if we match, we extend the wait by another grace period
using retry = true. Otherwise the work either already finished running
or was never scheduled, so we can free the bpf_async_cb right away.
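A sketch of the resulting retry decision (the delete_work field name is
illustrative; WORK_BUSY_RUNNING reported by work_busy() is what drives
the extra grace period):
if (!cancel_work(&cb->delete_work)) {
	/* not pending: never scheduled, already finished, or running now;
	 * only a currently running callback needs another grace period */
	if (work_busy(&cb->delete_work) & WORK_BUSY_RUNNING)
		retry = true;
}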
Fixes: 1bfbc267ec91 ("bpf: Enable bpf_timer and bpf_wq in any context")
Reported-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20260205003853.527571-2-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull misc fixes from Andrew Morton:
"Five hotfixes. Two are cc:stable, two are for MM.
All are singletons - please see the changelogs for details"
* tag 'mm-hotfixes-stable-2026-02-04-15-55' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
Documentation: document liveupdate cmdline parameter
mm, shmem: prevent infinite loop on truncate race
mailmap: update Alexander Mikhalitsyn's emails
liveupdate: luo_file: do not clear serialized_data on unfreeze
x86/kfence: fix booting on 32bit non-PAE systems
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/devsec/tsm
Pull TSM (TEE security Manager) fixes from Dan Williams:
"The largest change is reverting part of an ABI that never shipped in a
released kernel (Documentation/ABI/testing/sysfs-class-tsm). The fix /
replacement for that is too large to squeeze in at this late date.
The rest is a collection of small fixups:
- Fix multiple streams per host bridge for SEV-TIO
- Drop the TSM ABI for reporting IDE streams (to be replaced)
- Fix virtual function enumeration
- Fix reserved stream ID initialization
- Fix unused variable compiler warning"
* tag 'tsm-fixes-for-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/devsec/tsm:
crypto/ccp: Allow multiple streams on the same root bridge
crypto/ccp: Use PCI bridge defaults for IDE
coco/tsm: Remove unused variable tsm_rwsem
PCI/IDE: Fix reading a wrong reg for unused sel stream initialization
PCI/IDE: Fix off by one error calculating VF RID range
Revert "PCI/TSM: Report active IDE streams"
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext
Pull sched_ext fix from Tejun Heo:
- Fix race where sched_class operations (sched_setscheduler() and
friends) could be invoked on dead tasks after sched_ext_dead()
already ran, causing invalid SCX task state transitions and NULL
pointer dereferences.
This was a regression from the cgroup exit ordering fix which
moved sched_ext_free() to finish_task_switch().
* tag 'sched_ext-for-6.19-rc8-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext:
sched_ext: Short-circuit sched_class operations on dead tasks
|
|
This is a printf-style function, which gcc -Werror=suggest-attribute=format
correctly points out:
drivers/hwmon/occ/common.c: In function 'occ_init_attribute':
drivers/hwmon/occ/common.c:761:9: error: function 'occ_init_attribute' might be a candidate for 'gnu_printf' format attribute [-Werror=suggest-attribute=format]
Add the attribute to avoid this warning and ensure any incorrect
format strings are detected here.
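For reference, a small standalone example of what the format attribute
buys: the compiler now checks the variadic arguments against the format
string at every call site (the function below is illustrative, not the
occ driver code):
#include <stdarg.h>
#include <stdio.h>

/* Equivalent of the kernel's __printf(1, 2) annotation: argument 1 is
 * the format string, the variadic arguments start at position 2. */
static __attribute__((format(printf, 1, 2)))
void log_msg(const char *fmt, ...)
{
	va_list ap;

	va_start(ap, fmt);
	vprintf(fmt, ap);
	va_end(ap);
}

int main(void)
{
	log_msg("sensor %d: %s\n", 3, "ok");	/* checked: OK */
	/* log_msg("sensor %d\n", "oops"); would now warn with -Wformat */
	return 0;
}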
Fixes: 744c2fe950e9 ("hwmon: (occ) Rework attribute registration for stack usage")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20260203163440.2674340-1-arnd@kernel.org
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
|
|
7900aa699c34 ("sched_ext: Fix cgroup exit ordering by moving sched_ext_free()
to finish_task_switch()") moved sched_ext_free() to finish_task_switch() and
renamed it to sched_ext_dead() to fix cgroup exit ordering issues. However,
this created a race window where certain sched_class ops may be invoked on
dead tasks, leading to failures - e.g. sched_setscheduler() may try to switch
a task which finished sched_ext_dead() back into SCX, triggering invalid SCX
task state transitions.
Add task_dead_and_done() which tests whether a task is TASK_DEAD and has
completed its final context switch, and use it to short-circuit sched_class
operations which may be called on dead tasks.
Fixes: 7900aa699c34 ("sched_ext: Fix cgroup exit ordering by moving sched_ext_free() to finish_task_switch()")
Reported-by: Andrea Righi <arighi@nvidia.com>
Link: http://lkml.kernel.org/r/20260202151341.796959-1-arighi@nvidia.com
Reviewed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
|
|
Puranjay Mohan says:
====================
bpf: Improve linked register tracking
V3: https://lore.kernel.org/all/20260203222643.994713-1-puranjay@kernel.org/
Changes in v3->v4:
- Add a call to reg_bounds_sync() in sync_linked_regs() to sync bounds after alu op.
- Add a __sink(path[0]); in the C selftest so compiler doesn't say "error: variable 'path' set but
not used"
V2: https://lore.kernel.org/all/20260113152529.3217648-1-puranjay@kernel.org/
Changes in v2->v3:
- Added another selftest showing a real usage pattern
- Rebased on bpf-next/master
v1: https://lore.kernel.org/bpf/20260107203941.1063754-1-puranjay@kernel.org/
Changes in v1->v2:
- Add support for alu32 operations in linked register tracking (Alexei)
- Squash the selftest fix with the first patch (Eduard)
- Add more selftests to detect edge cases
This series extends the BPF verifier's linked register tracking to handle
negative offsets, BPF_SUB operations, and alu32 operations, enabling better bounds propagation for
common arithmetic patterns.
The verifier previously only tracked positive constant deltas between linked
registers using BPF_ADD. This meant patterns using negative offsets or
subtraction couldn't benefit from bounds propagation:
void alu32_negative_offset(void)
{
volatile char path[5];
volatile int offset = bpf_get_prandom_u32();
int off = offset;
if (off >= 5 && off < 10)
path[off - 5] = '.';
}
this gets compiled to:
0000000000000478 <alu32_negative_offset>:
143: call 0x7
144: *(u32 *)(r10 - 0xc) = w0
145: w1 = *(u32 *)(r10 - 0xc)
146: w2 = w1 // w2 and w1 share the same id
147: w2 += -0x5 // verifier knows w1 = w2 + 5
148: if w2 > 0x4 goto +0x5 <L0> // in fall-through: verifier knows w2 ∈ [0,4] => w1 ∈ [5, 9]
149: r2 = r10
150: r2 += -0x5 // r2 = fp - 5
151: r2 += r1 // r2 = fp - 5 + r1 (∈ [5, 9]) => r2 ∈ [fp, fp + 4]
152: w1 = 0x2e
153: *(u8 *)(r2 - 0x5) = w1 // r2 ∈ [fp, fp + 4] => r2 - 5 ∈ [fp - 5, fp - 1]
<L0>:
154: exit
After the changes, the verifier can link 32-bit scalars and also supports negative offsets for linking:
146: w2 = w1
147: w2 += -0x5
This allows the verifier to correctly propagate bounds; without the
changes in this patchset, the verifier would reject this program with:
invalid unbounded variable-offset write to stack R2
This program has been added as a selftest in the second patch.
Veristat comparison on programs from sched_ext, selftests, and some meta internal programs:
Scx Progs
File Program Verdict (A) Verdict (B) Verdict (DIFF) Insns (A) Insns (B) Insns (DIFF)
----------------- ---------------- ----------- ----------- -------------- --------- --------- -------------
scx_layered.bpf.o layered_runnable success success MATCH 5674 6077 +403 (+7.10%)
FB Progs
File Program Verdict (A) Verdict (B) Verdict (DIFF) Insns (A) Insns (B) Insns (DIFF)
------------ ---------------- ----------- ----------- -------------- --------- --------- -----------------
bpf232.bpf.o layered_dump success success MATCH 1151 1218 +67 (+5.82%)
bpf257.bpf.o layered_runnable success success MATCH 5743 6143 +400 (+6.97%)
bpf252.bpf.o layered_runnable success success MATCH 5677 6075 +398 (+7.01%)
bpf227.bpf.o layered_dump success success MATCH 915 982 +67 (+7.32%)
bpf239.bpf.o layered_runnable success success MATCH 5459 5861 +402 (+7.36%)
bpf246.bpf.o layered_runnable success success MATCH 5562 6008 +446 (+8.02%)
bpf229.bpf.o layered_runnable success success MATCH 2559 3011 +452 (+17.66%)
bpf231.bpf.o layered_runnable success success MATCH 2559 3011 +452 (+17.66%)
bpf234.bpf.o layered_runnable success success MATCH 2549 3001 +452 (+17.73%)
bpf019.bpf.o do_sendmsg success success MATCH 124823 153523 +28700 (+22.99%)
bpf019.bpf.o do_parse success success MATCH 124809 153509 +28700 (+23.00%)
bpf227.bpf.o layered_runnable success success MATCH 1915 2356 +441 (+23.03%)
bpf228.bpf.o layered_runnable success success MATCH 1700 2152 +452 (+26.59%)
bpf232.bpf.o layered_runnable success success MATCH 1499 1951 +452 (+30.15%)
bpf312.bpf.o mount_exit success success MATCH 19253 62883 +43630 (+226.61%)
bpf312.bpf.o umount_exit success success MATCH 19253 62883 +43630 (+226.61%)
bpf311.bpf.o mount_exit success success MATCH 19226 62863 +43637 (+226.97%)
bpf311.bpf.o umount_exit success success MATCH 19226 62863 +43637 (+226.97%)
The above four programs have specific patterns that make the verifier explore a lot more states:
for (; depth < MAX_DIR_DEPTH; depth++) {
const unsigned char* name = BPF_CORE_READ(dentry, d_name.name);
if (offset >= MAX_PATH_LEN - MAX_DIR_LEN) {
return depth;
}
int len = bpf_probe_read_kernel_str(&path[offset], MAX_DIR_LEN, name);
offset += len;
if (len == MAX_DIR_LEN) {
if (offset - 2 < MAX_PATH_LEN) { // <---- (a)
path[offset - 2] = '.';
}
if (offset - 3 < MAX_PATH_LEN) { // <---- (b)
path[offset - 3] = '.';
}
if (offset - 4 < MAX_PATH_LEN) { // <---- (c)
path[offset - 4] = '.';
}
}
}
When at some depth == N false branches of conditions (a), (b) and (c) are scheduled for
verification, constraints for offset at depth == N+1 are:
1. offset >= MAX_PATH_LEN + 2
2. offset >= MAX_PATH_LEN + 3
3. offset >= MAX_PATH_LEN + 4 (visited before others)
And after offset += len it becomes:
1. offset >= MAX_PATH_LEN - 4093
2. offset >= MAX_PATH_LEN - 4092
3. offset >= MAX_PATH_LEN - 4091 (visited before others)
Because of the DFS states exploration logic, the states above are visited in order 3, 2, 1; 3 is not
a subset of 2 and 1 is not a subset of 2, so pruning logic does not kick in.
Previously this was not a problem, because range for offset was not propagated through the
statements (a), (b), (c).
As the root cause of this regression is understood, this is not a blocker for this change.
Selftest Progs
File Program Verdict (A) Verdict (B) Verdict (DIFF) Insns (A) Insns (B) Insns (DIFF)
---------------------------------- ------------------------ ----------- ----------- -------------- --------- --------- --------------
linked_list_peek.bpf.o list_peek success success MATCH 152 88 -64 (-42.11%)
verifier_iterating_callbacks.bpf.o cond_break2 success success MATCH 110 88 -22 (-20.00%)
These are the added selftests that failed earlier but are passing now:
verifier_linked_scalars.bpf.o alu32_negative_offset failure success MISMATCH 11 13 +2 (+18.18%)
verifier_linked_scalars.bpf.o scalars_alu32_big_offset failure success MISMATCH 7 10 +3 (+42.86%)
verifier_linked_scalars.bpf.o scalars_neg_alu32_add failure success MISMATCH 7 10 +3 (+42.86%)
verifier_linked_scalars.bpf.o scalars_neg_alu32_sub failure success MISMATCH 7 10 +3 (+42.86%)
verifier_linked_scalars.bpf.o scalars_neg failure success MISMATCH 7 10 +3 (+42.86%)
verifier_linked_scalars.bpf.o scalars_neg_sub failure success MISMATCH 7 10 +3 (+42.86%)
verifier_linked_scalars.bpf.o scalars_sub_neg_imm failure success MISMATCH 7 10 +3 (+42.86%)
iters.bpf.o iter_obfuscate_counter success success MATCH 83 119 +36 (+43.37%)
bpf_cubic.bpf.o bpf_cubic_acked success success MATCH 243 430 +187 (+76.95%)
====================
Link: https://patch.msgid.link/20260204151741.2678118-1-puranjay@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Add tests for linked register tracking with negative offsets, BPF_SUB,
and alu32. These test for all edge cases like overflows, etc.
Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20260204151741.2678118-3-puranjay@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Previously, the verifier only tracked positive constant deltas between
linked registers using BPF_ADD. This limitation meant patterns like:
r1 = r0;
r1 += -4;
if r1 s>= 0 goto l0_%=; // r1 >= 0 implies r0 >= 4
// verifier couldn't propagate bounds back to r0
if r0 != 0 goto l0_%=;
r0 /= 0; // Verifier thinks this is reachable
l0_%=:
A similar limitation exists for 32-bit registers.
With this change, the verifier can now track negative deltas in reg->off,
enabling bounds propagation for the above pattern.
For alu32, we make sure the destination register has the upper 32 bits
as 0s before creating the link. BPF_ADD_CONST is split into
BPF_ADD_CONST64 and BPF_ADD_CONST32; the latter is used in the alu32
case, and sync_linked_regs() uses it to zero-extend the result if
known_reg has this flag.
Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20260204151741.2678118-2-puranjay@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Tianci Cao says:
====================
bpf: Add bitwise tracking for BPF_END
Add bitwise tracking (tnum analysis) for BPF_END (`bswap(16|32|64)`,
`be(16|32|64)`, `le(16|32|64)`) operations. Please see commit log of
1/2 for more details.
v3:
- Resend to fix a version control error in v2.
- The rest of the changes are identical to v2.
v2 (incorrect): https://lore.kernel.org/bpf/20260204091146.52447-1-ziye@zju.edu.cn/
- Refactored selftests using BSWAP_RANGE_TEST macro to eliminate code
duplication and improve maintainability. (Eduard)
- Simplified test names. (Eduard)
- Reduced excessive comments in test cases. (Eduard)
- Added more comments to explain BPF_END's special handling of zext_32_to_64.
v1: https://lore.kernel.org/bpf/20260202133536.66207-1-ziye@zju.edu.cn/
====================
Link: https://patch.msgid.link/20260204111503.77871-1-ziye@zju.edu.cn
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Now BPF_END has bitwise tracking support. This patch adds selftests to
cover various cases of BPF_END (`bswap(16|32|64)`, `be(16|32|64)`,
`le(16|32|64)`) with bitwise propagation.
This patch is based on the existing `verifier_bswap.c`, and adds several
types of new tests:
1. Unconditional byte swap operations:
- bswap16/bswap32/bswap64 with unknown bytes
2. Endian conversion operations (architecture-aware):
- be16/be32/be64: convert to big-endian
* on little-endian: do swap
* on big-endian: truncation (16/32-bit) or no-op (64-bit)
- le16/le32/le64: convert to little-endian
* on big-endian: do swap
* on little-endian: truncation (16/32-bit) or no-op (64-bit)
Each test simulates realistic networking scenarios where a value is
masked with unknown bits (e.g., var_off=(0x0; 0x3f00), range=[0,0x3f00]),
then byte-swapped, and the verifier must prove the result stays within
expected bounds.
Specifically, these selftests are based on dead code elimination:
If the BPF verifier can precisely track bitwise information through byte
swap operations, it can prune the trap path (invalid memory access) that
should be unreachable, allowing the program to pass verification.
If bitwise tracking is incorrect, the verifier cannot prove the trap
is unreachable, causing verification failure.
The tests use preprocessor conditionals (#ifdef __BYTE_ORDER__) to
verify correct behavior on both little-endian and big-endian
architectures, and require Clang 18+ for bswap instruction support.
Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Co-developed-by: Yazhou Tang <tangyazhou518@outlook.com>
Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20260204111503.77871-3-ziye@zju.edu.cn
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
This patch implements bitwise tracking (tnum analysis) for the BPF_END
(byte swap) operation.
Currently, the BPF verifier does not track values for the BPF_END operation,
treating the result as completely unknown. This limits the verifier's
ability to prove safety of programs that perform endianness conversions,
which are common in networking code.
For example, the following code pattern for port number validation:
int test(struct pt_regs *ctx) {
__u64 x = bpf_get_prandom_u32();
x &= 0x3f00; // Range: [0, 0x3f00], var_off: (0x0; 0x3f00)
x = bswap16(x); // Should swap to range [0, 0x3f], var_off: (0x0; 0x3f)
if (x > 0x3f) goto trap;
return 0;
trap:
return *(u64 *)NULL; // Should be unreachable
}
Currently generates verifier output:
1: (54) w0 &= 16128 ; R0=scalar(smin=smin32=0,smax=umax=smax32=umax32=16128,var_off=(0x0; 0x3f00))
2: (d7) r0 = bswap16 r0 ; R0=scalar()
3: (25) if r0 > 0x3f goto pc+2 ; R0=scalar(smin=smin32=0,smax=umax=smax32=umax32=63,var_off=(0x0; 0x3f))
Without this patch, even though the verifier knows `x` has certain bits
set, after bswap16, it loses all tracking information and treats the
port as having a completely unknown value [0, 65535].
According to the BPF instruction set[1], there are 3 kinds of BPF_END:
1. `bswap(16|32|64)`: opcode=0xd7 (BPF_END | BPF_ALU64 | BPF_TO_LE)
- do unconditional swap
2. `le(16|32|64)`: opcode=0xd4 (BPF_END | BPF_ALU | BPF_TO_LE)
- on big-endian: do swap
- on little-endian: truncation (16/32-bit) or no-op (64-bit)
3. `be(16|32|64)`: opcode=0xdc (BPF_END | BPF_ALU | BPF_TO_BE)
- on little-endian: do swap
- on big-endian: truncation (16/32-bit) or no-op (64-bit)
Since BPF_END operations are inherently bit-wise permutations, tnum
(bitwise tracking) offers the most efficient and precise mechanism
for value analysis. By implementing `tnum_bswap16`, `tnum_bswap32`,
and `tnum_bswap64`, we can derive exact `var_off` values concisely,
directly reflecting the bit-level changes.
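As a rough user-space model of the idea (a toy illustration, not the
kernel's tnum code), byte-swapping a (value, mask) pair and truncating
it shows how the unknown bits simply move with the swap:
#include <stdint.h>
#include <stdio.h>

/* Toy tnum: 'value' holds the known-set bits, 'mask' holds the unknown bits. */
struct toy_tnum { uint64_t value; uint64_t mask; };

/* Swap value and mask the same way the data bytes are swapped, then
 * truncate to 16 bits - a model of the tnum_bswap16 idea described above. */
static struct toy_tnum toy_tnum_bswap16(struct toy_tnum a)
{
	struct toy_tnum r = {
		.value = __builtin_bswap16((uint16_t)a.value),
		.mask  = __builtin_bswap16((uint16_t)a.mask),
	};
	return r;
}

int main(void)
{
	/* x &= 0x3f00  =>  var_off=(0x0; 0x3f00): bits 8..13 unknown */
	struct toy_tnum x = { .value = 0, .mask = 0x3f00 };
	struct toy_tnum y = toy_tnum_bswap16(x);

	printf("var_off=(%#llx; %#llx)\n",
	       (unsigned long long)y.value, (unsigned long long)y.mask);
	/* prints var_off=(0; 0x3f): the unknown bits moved to the low byte */
	return 0;
}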
Here is the overview of changes:
1. In `tnum_bswap(16|32|64)` (kernel/bpf/tnum.c):
Call the `swab(16|32|64)` function on the value and mask of `var_off`, and
do truncation for the 16/32-bit cases.
2. In `adjust_scalar_min_max_vals` (kernel/bpf/verifier.c):
Call the helper function `scalar_byte_swap`.
- Only do the byte swap when
* alu64 (unconditional swap) OR
* switching between big-endian and little-endian machines.
- If a byte swap is needed:
* First call `tnum_bswap(16|32|64)` to update `var_off`.
* Then reset the bounds since the byte swap scrambles the range.
- For the 16/32-bit cases, truncate the dst register to match the swapped size.
This enables better verification of networking code that frequently uses
byte swaps for protocol processing, reducing false positive rejections.
[1] https://www.kernel.org/doc/Documentation/bpf/standardization/instruction-set.rst
Co-developed-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Signed-off-by: Shenghao Yuan <shenghaoyuan0928@163.com>
Co-developed-by: Yazhou Tang <tangyazhou518@outlook.com>
Signed-off-by: Yazhou Tang <tangyazhou518@outlook.com>
Signed-off-by: Tianci Cao <ziye@zju.edu.cn>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20260204111503.77871-2-ziye@zju.edu.cn
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Alexei Starovoitov says:
====================
bpf: Fix conditions when timer/wq can be called
From: Alexei Starovoitov <ast@kernel.org>
v2->v3:
- Add missing refcount_put
- Detect recursion of individual async_cb
v2: https://lore.kernel.org/bpf/20260204040834.22263-4-alexei.starovoitov@gmail.com/
v1->v2:
- Add a recursion check
v1: https://lore.kernel.org/bpf/20260204030927.171-1-alexei.starovoitov@gmail.com/
====================
Link: https://patch.msgid.link/20260204055147.54960-1-alexei.starovoitov@gmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
|
|
Strengthen the timer_start_deadlock test and also check for recursion.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20260204055147.54960-5-alexei.starovoitov@gmail.com
|
|
Do not schedule a timer/wq operation on a CPU that is in an irq_work
callback processing the async_cmds queue.
Otherwise the following loop is possible:
bpf_timer_start() -> bpf_async_schedule_op() -> irq_work_queue().
irqrestore -> bpf_async_irq_worker() -> tracepoint -> bpf_timer_start().
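One way to break such a loop - sketched below with invented names, since
the actual mechanism is not quoted here - is a per-CPU flag set for the
duration of the irq_work callback, which the scheduling path can test
before queueing the irq_work again:
/* Hypothetical sketch; the per-CPU flag and its placement are assumptions. */
static DEFINE_PER_CPU(bool, bpf_async_in_irq_worker);

static void bpf_async_irq_worker(struct irq_work *work)
{
	__this_cpu_write(bpf_async_in_irq_worker, true);
	/* drain the async_cmds queue here; tracepoints may fire and
	 * re-enter bpf_timer_start() on this CPU */
	__this_cpu_write(bpf_async_in_irq_worker, false);
}
The scheduling path then avoids irq_work_queue() when
this_cpu_read(bpf_async_in_irq_worker) is true.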
Fixes: 1bfbc267ec91 ("bpf: Enable bpf_timer and bpf_wq in any context")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20260204055147.54960-4-alexei.starovoitov@gmail.com
|
|
Add a testcase that checks that deadlock avoidance is working
as expected.
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20260204055147.54960-3-alexei.starovoitov@gmail.com
|
|
Though hrtimer_start/cancel() inlines all of the smaller helpers in
hrtimer.c and only calls timerqueue_add/del() from lib/timerqueue.c,
where everything is not traceable and not kprobe-able (because all files
in lib/ are not traceable), there are tracepoints within hrtimer that are
called with locks held. Therefore prevent the deadlock by tightening the
conditions under which timer/wq can be called synchronously.
hrtimer/wq use raw_spin_lock_irqsave(), so checking irqs_disabled() is enough.
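In other words, the synchronous path is only taken with interrupts
enabled; a minimal sketch of such a gate (the helper name is invented,
not the exact in-tree check):
/* Illustrative only: call the hrtimer/wq APIs directly only when IRQs
 * are enabled, since their locks are taken with raw_spin_lock_irqsave(). */
static bool bpf_async_can_call_sync(void)
{
	return !irqs_disabled();
}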
Fixes: 1bfbc267ec91 ("bpf: Enable bpf_timer and bpf_wq in any context")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20260204055147.54960-2-alexei.starovoitov@gmail.com
|
|
The CephFS kernel client has a regression starting from 6.18-rc1.
There is an issue in ceph_mds_auth_match() if fs_name == NULL:
const char *fs_name = mdsc->fsc->mount_options->mds_namespace;
...
if (auth->match.fs_name && strcmp(auth->match.fs_name, fs_name)) {
	/* fsname mismatch, try next one */
	return 0;
}
Patrick Donnelly suggested that: In summary, we should definitely start
decoding `fs_name` from the MDSMap and do strict authorization checks
against it. Note that the `-o mds_namespace=foo` option should only be
used for selecting the file system to mount and nothing else. It's
possible no mds_namespace is specified but the kernel will mount the only
file system that exists, which may have the name "foo".
This patch reworks ceph_mdsmap_decode() and namespace_equals() with
the goal of supporting the suggested concept. Now struct ceph_mdsmap
contains an m_fs_name field that receives a copy of the FS name
extracted by ceph_extract_encoded_string(). For "old" CephFS file
systems, the name "cephfs" is used.
[ idryomov: replace redundant %*pE with %s in ceph_mdsmap_decode(),
get rid of a series of strlen() calls in ceph_namespace_match(),
drop changes to namespace_equals() body to avoid treating empty
mds_namespace as equal, drop changes to ceph_mdsc_handle_fsmap()
as namespace_equals() isn't an equivalent substitution there ]
Cc: stable@vger.kernel.org
Fixes: 22c73d52a6d0 ("ceph: fix multifs mds auth caps issue")
Link: https://tracker.ceph.com/issues/73886
Signed-off-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Reviewed-by: Patrick Donnelly <pdonnell@ibm.com>
Tested-by: Patrick Donnelly <pdonnell@ibm.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
|
|
Merge cpupower utility updates for 6.20-rc1/7.0-rc1:
- Fix miscellaneous problems in cpupower (Kaushlendra Kumar):
* idle_monitor: Fix incorrect value logged after stop
* Fix inverted APERF capability check
* Use strcspn() to strip trailing newline
* Reset errno before strtoull()
* Show C0 in idle-info dump
- Improve cpupower installation procedure by making the systemd step
optional and allowing users to disable the installation of systemd's
unit file (João Marcos Costa)
* pm-tools:
cpupower: make systemd unit installation optional
tools/power cpupower: Show C0 in idle-info dump
tools/power cpupower: Reset errno before strtoull()
tools/cpupower: Use strcspn() to strip trailing newline
tools/cpupower: Fix inverted APERF capability check
cpupower: idle_monitor: fix incorrect value logged after stop
|
|
Merge power capping updates, OPP (operating performance points) updates
and energy model management documentation updates for 6.20-rc1/7.0-rc1:
- Add PL4 support for Ice Lake to the Intel RAPL power capping
driver (Daniel Tang)
- Replace sprintf() with sysfs_emit() in power capping sysfs show
functions (Sumeet Pawnikar)
- Make dev_pm_opp_get_level() return value match the documentation
after a previous update of the latter (Aleks Todorov)
- Use scoped for each OF child loop in the OPP code (Krzysztof
Kozlowski)
- Fix a bug in an example code snippet and correct typos in the energy
model management documentation (Patrick Little)
* pm-powercap:
powercap: intel_rapl: Add PL4 support for Ice Lake
powercap: Replace sprintf() with sysfs_emit() in sysfs show functions
* pm-opp:
OPP: Return correct value in dev_pm_opp_get_level
OPP: of: Simplify with scoped for each OF child loop
* pm-em:
PM: EM: Documentation: Fix bug in example code snippet
Documentation: Fix typos in energy model documentation
|
|
Merge updates related to runtime PM for 6.20-rc1/7.0-rc1:
- Make several drivers discard pm_runtime_put() return value in
preparation for converting that function to a void one (Rafael
Wysocki)
* pm-runtime:
drm: Discard pm_runtime_put() return value
genirq/chip: Change irq_chip_pm_put() return type to void
scsi: ufs: core: Discard pm_runtime_put() return values
platform/chrome: cros_hps_i2c: Discard pm_runtime_put() return value
coresight: Discard pm_runtime_put() return values
hwspinlock: omap: Discard pm_runtime_put() return value
watchdog: rzv2h_wdt: Discard pm_runtime_put() return value
watchdog: rz: Discard pm_runtime_put() return values
media: ccs: Discard pm_runtime_put() return value
drm/imagination: Discard pm_runtime_put() return value
USB: core: Discard pm_runtime_put() return value
|
|
Merge updates related to system suspend and hibernation for
6.20-rc1/7.0-rc1:
- Stop flagging the PM runtime workqueue as freezable to avoid system
suspend and resume deadlocks in subsystems that assume asynchronous
runtime PM to work during system-wide PM transitions (Rafael Wysocki)
- Drop redundant NULL pointer checks before acomp_request_free() from
the hibernation code handling image saving (Rafael Wysocki)
- Update wakeup_sources_walk_start() to handle empty lists of wakeup
sources as appropriate (Samuel Wu)
- Make dev_pm_clear_wake_irq() check the power.wakeirq value under
power.lock to avoid race conditions (Gui-Dong Han)
- Avoid bit field races related to power.work_in_progress in the core
device suspend code (Xuewen Yan)
* pm-sleep:
PM: sleep: core: Avoid bit field races related to work_in_progress
PM: sleep: wakeirq: harden dev_pm_clear_wake_irq() against races
PM: wakeup: Handle empty list in wakeup_sources_walk_start()
PM: hibernate: Drop NULL pointer checks before acomp_request_free()
PM: sleep: Do not flag runtime PM workqueue as freezable
|
|
Merge cpuidle updates for 6.20-rc1/7.0-rc1:
- Add a command line option to adjust the C-states table in the
intel_idle driver, remove the 'preferred_cstates' module parameter
from it, add C-states validation to it and clean it up (Artem
Bityutskiy)
- Make the menu cpuidle governor always check the time till the closest
timer event when the scheduler tick has been stopped to prevent it
from mistakenly selecting the deepest available idle state (Rafael
Wysocki)
- Update the teo cpuidle governor to avoid making suboptimal decisions
in certain corner cases and generally improve idle state selection
accuracy (Rafael Wysocki)
- Remove an unlikely() annotation on the early-return condition in
menu_select() that leads to branch misprediction 100% of the time
on systems with only 1 idle state enabled, like ARM64 servers (Breno
Leitao)
- Add Christian Loehle to MAINTAINERS as a cpuidle reviewer (Christian
Loehle)
* pm-cpuidle:
cpuidle: governors: teo: Refine intercepts-based idle state lookup
cpuidle: governors: teo: Adjust the classification of wakeup events
cpuidle: governors: teo: Refine tick_intercepts vs total events check
cpuidle: governors: teo: Avoid fake intercepts produced by tick
cpuidle: governors: teo: Avoid selecting states with zero-size bins
cpuidle: governors: menu: Always check timers with tick stopped
MAINTAINERS: Add myself as cpuidle reviewer
cpuidle: menu: Remove incorrect unlikely() annotation
intel_idle: Add C-states validation
intel_idle: Add cmdline option to adjust C-states table
intel_idle: Initialize sysfs after cpuidle driver initialization
intel_idle: Remove the 'preferred_cstates' parameter
intel_idle: Remove unused driver version constant
|
|
Pull KVM fixes from Paolo Bonzini:
- Fix a bug where AVIC is incorrectly inhibited when running with
x2AVIC disabled via module param (or on a system without x2AVIC)
- Fix a dangling device posted IRQs bug by explicitly checking if the
irqfd is still active (on the list) when handling an eventfd signal,
instead of zeroing the irqfd's routing information when the irqfd is
deassigned.
Zeroing the irqfd's routing info causes arm64 and x86 to not
disable posting for the IRQ (kvm_arch_irq_bypass_del_producer() looks
for an MSI), incorrectly leaving the IRQ in posted mode (and leading
to use-after-free and memory leaks on AMD in particular).
This is both the most pressing and scariest, but it's been in -next
for a while.
- Disable FORTIFY_SOURCE for KVM selftests to prevent the compiler from
generating calls to the checked versions of memset() and friends,
which leads to unexpected page faults in guest code due to, e.g.,
__memset_chk@plt not being resolved.
- Explicitly configure the supported XSS capabilities from within
{svm,vmx}_set_cpu_caps() to fix a bug where VMX will compute the
reference VMCS configuration with SHSTK and IBT enabled, but then
compute each CPU's local config with SHSTK and IBT disabled if not all
CET xfeatures are enabled, e.g. if the kernel is built with
X86_KERNEL_IBT=n.
The mismatch in features results in differing nVMX settings, and
ultimately causes kvm-intel.ko to refuse to load with nested=1.
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: x86: Explicitly configure supported XSS from {svm,vmx}_set_cpu_caps()
KVM: selftests: Add -U_FORTIFY_SOURCE to avoid some unpredictable test failures
KVM: x86: Assert that non-MSI doesn't have bypass vCPU when deleting producer
KVM: Don't clobber irqfd routing type when deassigning irqfd
KVM: SVM: Check vCPU ID against max x2AVIC ID if and only if x2AVIC is enabled
|
|
Merge updates of Intel thermal drivers for 6.20/7.0:
- Add Panther Lake, Wildcat Lake and Nova Lake processor IDs to the
list of supported processors in the intel_tcc_cooling thermal
driver (Srinivas Pandruvada)
- Drop unnecessary explicit driver data clearing on removal from the
intel_pch_thermal driver (Kaushlendra Kumar)
- Add support for "slow" workload type hints to the int340x
processor_thermal driver and enable it on the Panther Lake
platform (Srinivas Pandruvada)
- Use sysfs_emit{_at}() in sysfs show functions in Intel thermal
drivers (Thorsten Blum)
- Update the x86_pkg_temp_thermal driver to handle THERMAL_TEMP_INVALID
that can be passed to it via sysfs as expected (Rafael Wysocki)
- Drop a redundant local variable from the intel_tcc_cooling thermal
driver and fix a kerneldoc comment typo in the TCC library (Sumeet
Pawnikar)
* thermal-intel:
drivers: thermal: intel: tcc_cooling: Drop redundant local variable
thermal: intel: x86_pkg_temp_thermal: Handle invalid temperature
thermal: intel: Use sysfs_emit() in a sysfs show function
thermal: intel: fix typo "nagative" in comment for cpu argument
thermal: intel: int340x: Use sysfs_emit{_at}() in sysfs show functions
thermal: intel: selftests: workload_hint: Support slow workload hints
thermal: int340x: processor_thermal: Enable slow workload type hints
thermal: intel: intel_pch_thermal: Drop explicit driver data clearing
thermal: intel: intel_tcc_cooling: Add CPU models in the support list
|
|
Preserve the original relative order of anonymous or same-named
types to improve consistency.
No functional changes.
Signed-off-by: Donglin Peng <pengdonglin@xiaomi.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20260202120114.3707141-1-dolinux.peng@gmail.com
|
|
Kuniyuki Iwashima says:
====================
bpf: Misc changes around AF_UNIX.
Patch 1 adapts sk_is_XXX() helpers in __cgroup_bpf_run_filter_sock_addr().
Patch 2 removes an unnecessary sk_fullsock() in bpf_skc_to_unix_sock().
====================
Link: https://patch.msgid.link/20260203213442.682838-1-kuniyu@google.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
|
|
AF_UNIX uses neither TCP_NEW_SYN_RECV nor TCP_TIME_WAIT, so checking
sk->sk_family is sufficient.
Let's remove sk_fullsock() and use sk_is_unix() in
bpf_skc_to_unix_sock().
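Based on the description above, the simplified helper would look roughly
like the sketch below (modeled on the usual BPF_CALL_1 helper shape;
unrelated details of the real helper are omitted):
BPF_CALL_1(bpf_skc_to_unix_sock, struct sock *, sk)
{
	if (sk && sk_is_unix(sk))
		return (unsigned long)sk;

	return (unsigned long)NULL;
}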
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://patch.msgid.link/20260203213442.682838-3-kuniyu@google.com
|
|
sk->sk_family should be read with READ_ONCE() in
__cgroup_bpf_run_filter_sock_addr() due to IPV6_ADDRFORM.
Also, the comment there is a bit stale since commit 859051dd165e
("bpf: Implement cgroup sockaddr hooks for unix sockets"), and the
kdoc has the same comment.
Let's use sk_is_inet() and sk_is_unix() and remove the comment.
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://patch.msgid.link/20260203213442.682838-2-kuniyu@google.com
|
|
Final KVM fixes for 6.19:
- Fix a bug where AVIC is incorrectly inhibited when running with x2AVIC
disabled via module param (or on a system without x2AVIC).
- Fix a dangling device posted IRQs bug by explicitly checking if the irqfd is
still active (on the list) when handling an eventfd signal, instead of
zeroing the irqfd's routing information when the irqfd is deassigned.
Zeroing the irqfd's routing info causes arm64 and x86 to not disable
posting for the IRQ (kvm_arch_irq_bypass_del_producer() looks for an MSI),
incorrectly leaving the IRQ in posted mode (and leading to use-after-free
and memory leaks on AMD in particular).
This is both the most pressing and scariest, but it's been in -next for
a while.
- Disable FORTIFY_SOURCE for KVM selftests to prevent the compiler from
generating calls to the checked versions of memset() and friends, which
leads to unexpected page faults in guest code due to, e.g., __memset_chk@plt
not being resolved.
- Explicitly configure the supported XSS from within {svm,vmx}_set_cpu_caps() to
fix a bug where VMX will compute the reference VMCS configuration with SHSTK
and IBT enabled, but then compute each CPU's local config with SHSTK and IBT
disabled if not all CET xfeatures are enabled, e.g. if the kernel is built
with X86_KERNEL_IBT=n. The mismatch in features results in differing nVMX
settings, and ultimately causes kvm-intel.ko to refuse to load with nested=1.
|
|
The second kill_bdev() call in set_blocksize() is redundant as the first
call already clears all buffers and pagecache, and locks prevent new
pagecache creation between the calls.
Signed-off-by: Yang Xiuwei <yangxiuwei@kylinos.cn>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
Pull SoC fixes from Arnd Bergmann:
"Shawn Guo is moving on from maintaining the NXP i.MX platform and
hands over to Frank Li. Shawn has maintained the platform for 15 years
after initially upstreaming support for i.MX6 and i.MX23/28, and his
work has helped make this the most important industrial embedded Linux
platform. Roughly one out of five devicetree files in mainline kernels
are for the wider i.MX platform. Many thanks to Shawn for taking care
of the platform all these years!
There are also two additional updates for the MAINTAINERS file, and a
fix for error handling in the qualcomm smem driver"
* tag 'soc-fixes-6.19-3' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc:
MAINTAINERS: Change Sudeep Holla's email address
MAINTAINERS: Add myself as maintainer of hisi_soc_hha
soc: qcom: smem: fix qcom_smem_is_available and check if __smem is valid
MAINTAINERS: Replace Shawn with Frank as i.MX platform maintainer
|
|
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus
ASoC: Fixes for v6.19
A bunch more small fixes here, plus some more of the constant stream of
quirks. The most notable change here is Richard's change to the cs_dsp
code for the KUnit tests, which is relatively large, mostly due to
boilerplate. The tests were triggering large numbers of error messages
as part of verifying that problems with input data are appropriately
detected, which in turn caused runtime issues for the framework due to
the performance impact of pushing the logging out. While the logging is
valuable in normal operation, it's basically useless for tests designed
to trigger it, so rate limiting is an appropriate fix.
|
|
Restrict D3Cold disablement for BMG to unsupported NUC platforms,
instead of disabling it on all platforms.
Signed-off-by: Karthik Poosa <karthik.poosa@intel.com>
Fixes: 3e331a6715ee ("drm/xe/pm: Temporarily disable D3Cold on BMG")
Link: https://patch.msgid.link/20260123173238.1642383-1-karthik.poosa@intel.com
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
(cherry picked from commit 39125eaf8863ab09d70c4b493f58639b08d5a897)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
|
|
Correct the function name in the kerneldoc.
This fixes the following warning:
"Warning: drivers/gpu/drm/xe/xe_tlb_inval_job.c:210 expecting prototype for
xe_tlb_inval_alloc_dep(). Prototype was for xe_tlb_inval_job_alloc_dep()
instead"
Fixes: 15366239e2130 ("drm/xe: Decouple TLB invalidations from GT")
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Shuicheng Lin <shuicheng.lin@intel.com>
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patch.msgid.link/20260129233834.419977-8-shuicheng.lin@intel.com
(cherry picked from commit 9f9c117ac566cb567dd56cc5b7564c45653f7a2a)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
|
|
Correct the function name in the kerneldoc.
This fixes the following warning:
"Warning: drivers/gpu/drm/xe/xe_tlb_inval.c:136 expecting prototype for
xe_gt_tlb_inval_init(). Prototype was for xe_gt_tlb_inval_init_early()
instead"
v2: add () for the function. (Michal)
Fixes: db16f9d90c1d9 ("drm/xe: Split TLB invalidation code in frontend and backend")
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Shuicheng Lin <shuicheng.lin@intel.com>
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patch.msgid.link/20260129233834.419977-7-shuicheng.lin@intel.com
(cherry picked from commit 0651dbb9d6a72e99569576fbec4681fd8160d161)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
|
|
Correct the function name in the kerneldoc.
This fixes the following warning:
"Warning: drivers/gpu/drm/xe/xe_migrate.c:1262 expecting prototype for
xe_get_migrate_exec_queue(). Prototype was for xe_migrate_exec_queue()
instead"
Fixes: 916ee4704a865 ("drm/xe/vf: Register CCS read/write contexts with Guc")
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Shuicheng Lin <shuicheng.lin@intel.com>
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patch.msgid.link/20260129233834.419977-6-shuicheng.lin@intel.com
(cherry picked from commit 9fd8da717934f05125b9ba6782622c459a368dc0)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
|
|
The topology query helper advanced the user pointer by the size
of the pointer, not the size of the structure. This can misalign
the output blob and corrupt the following mask. Fix the increment
to use sizeof(*topo).
There is no issue currently, as sizeof(*topo) happens to be equal
to sizeof(topo) on 64-bit systems (both evaluate to 8 bytes).
Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Signed-off-by: Shuicheng Lin <shuicheng.lin@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patch.msgid.link/20260130043907.465128-2-shuicheng.lin@intel.com
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
(cherry picked from commit c2a6859138e7f73ad904be17dd7d1da6cc7f06b3)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
|
|
There is a spelling mistake in a pr_err message. Fix it.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Acked-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
Link: https://patch.msgid.link/20260203210735.5036-1-colin.i.king@gmail.com
|
|
Some io_uring files are missing SPDX-License-Identifier lines.
Add lines with GPL-2.0 license IDs to these files.
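For reference, the added identifier is a single comment at the top of
each C file, e.g.:
// SPDX-License-Identifier: GPL-2.0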
Signed-off-by: Tim Bird <tim.bird@sony.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
In all of the system suspend transition phases, the async processing of
a device may be carried out in parallel with power.work_in_progress
updates for the device's parent or suppliers, and if it touches bit
fields from the same group (for example, power.must_resume or
power.wakeup_path), bit field corruption is possible.
To avoid that, turn work_in_progress in struct dev_pm_info into a proper
bool field and relocate it to save space.
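A condensed illustration of the difference (not the actual struct
dev_pm_info layout):
/* Illustration only - not the real struct dev_pm_info. */
struct dev_pm_info_sketch {
	/* bit fields packed into one word: a concurrent read-modify-write
	 * of a neighbouring flag can lose an update to another bit */
	unsigned int	must_resume:1;
	unsigned int	wakeup_path:1;

	/* a plain bool has its own storage unit, so it can be updated in
	 * parallel with the flags above without corrupting them */
	bool		work_in_progress;
};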
Fixes: aa7a9275ab81 ("PM: sleep: Suspend async parents after suspending children")
Fixes: 443046d1ad66 ("PM: sleep: Make suspend of devices more asynchronous")
Signed-off-by: Xuewen Yan <xuewen.yan@unisoc.com>
Closes: https://lore.kernel.org/linux-pm/20260203063459.12808-1-xuewen.yan@unisoc.com/
Cc: All applicable <stable@vger.kernel.org>
[ rjw: Added subject and changelog ]
Link: https://patch.msgid.link/CAB8ipk_VX2VPm706Jwa1=8NSA7_btWL2ieXmBgHr2JcULEP76g@mail.gmail.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
https://git.kernel.org/pub/scm/linux/kernel/git/iwlwifi/iwlwifi-next
Miri Korenblit says:
====================
iwlwifi fixes
- Cancel mlo_scan_work on disassoc
- Pause TCM work on suspend
====================
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|