| author | Linus Torvalds <torvalds@linux-foundation.org> | 2026-05-05 15:22:04 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2026-05-05 15:22:04 -0700 |
| commit | de95ad90fb19e4b7778a0c27115a4639c7c8b186 | |
| tree | 1d74306e2964932ccd0b787ea743dc74587683f5 | /rust/kernel |
| parent | 50fb0bcc9d7da23e0f0fd5359b4f9ceb0aa337d2 | |
| parent | b34c82777a2c0648ee053595f4b290fd5249b093 | |
Merge tag 'sched_ext-for-7.1-rc2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext
Pull sched_ext fixes from Tejun Heo:
- Fix idle CPU selection returning prev_cpu even when it lies outside the
  task's cpus_ptr, which could happen when the BPF caller passed a wider
  allowed mask. Marked for stable backport.
- Fix two opposite-direction gaps between scx_task_iter's cgroup-scoped
  mode and its global mode:
  - Tasks past exit_signals() were filtered out by the cgroup walk but
    kept by the global walk, so aborting a sub-scheduler enable leaked
    __scx_init_task() state. Add a CSS_TASK_ITER_WITH_DEAD flag to
    cgroup's task iterator (scx_task_iter is its only user) and use it.
  - Tasks past sched_ext_dead() were still being returned, tripping
    WARN_ON_ONCE() in callers or letting them touch torn-down state.
    Mark such tasks and skip them under the per-task rq lock.
* tag 'sched_ext-for-7.1-rc2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext:
sched_ext: idle: Recheck prev_cpu after narrowing allowed mask
sched_ext: Skip past-sched_ext_dead() tasks in scx_task_iter_next_locked()
cgroup, sched_ext: Include exiting tasks in cgroup iter
Diffstat (limited to 'rust/kernel')
0 files changed, 0 insertions, 0 deletions
