| author | Tejun Heo <tj@kernel.org> | 2026-04-27 14:16:35 -1000 |
|---|---|---|
| committer | Tejun Heo <tj@kernel.org> | 2026-05-04 09:06:03 -1000 |
| commit | ff9eda4ea906b1f02fc260ddc42d2d9bd736a49c | |
| tree | e3dbebcc3fd61a806e97ae2cecf15e81e6291a5f /rust/kernel | |
| parent | 60f21a2649308bbd84919ba6656d5ccd660953cf | |
sched_ext: Skip past-sched_ext_dead() tasks in scx_task_iter_next_locked()
scx_task_iter's cgroup-scoped mode can return tasks whose
sched_ext_dead() has already completed: cgroup_task_dead() removes the
task from cset->tasks only after sched_ext_dead() has run in
finish_task_switch(), and the removal is further deferred to irq work
on PREEMPT_RT. The global mode is fine - sched_ext_dead() removes the
task from scx_tasks via list_del_init() first.

Callers (sub-sched enable prep/abort/apply, scx_sub_disable(),
scx_fail_parent()) assume that returned tasks are still on @sch and
otherwise trip WARN_ON_ONCE() or operate on torn-down state.

Fix it by setting %SCX_TASK_OFF_TASKS in sched_ext_dead() under @p's
rq lock and having scx_task_iter_next_locked() skip flagged tasks
under the same lock. The setter and the reader serialize on the
per-task rq lock, so there is no race.
Signed-off-by: Tejun Heo <tj@kernel.org>
