author | Oleg Nesterov <oleg@redhat.com> | 2014-01-21 15:50:01 -0800
---|---|---
committer | Mandar Padmawar <mpadmawar@nvidia.com> | 2014-06-18 21:33:22 -0700
commit | b24564b4e76b16bed3c19dccf59e974adf64aeca (patch) |
tree | d8e60f9688324c12a8b831a617f8bc8554e1b5fe /mm |
parent | 84ce23d27df5d272da0226d6ca4a40cffa348621 (diff) |
oom_kill: add rcu_read_lock() into find_lock_task_mm()
find_lock_task_mm() expects to be called under rcu or tasklist lock, but
it seems that at least oom_unkillable_task()->task_in_mem_cgroup() and
mem_cgroup_out_of_memory()->oom_badness() can call it locklessly.

Perhaps we could fix the callers, but this patch simply adds
rcu_read_lock() into find_lock_task_mm(). This also allows us to simplify
one of its callers, oom_kill_process(), a bit.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: Sergey Dyasly <dserrg@gmail.com>
Cc: Sameer Nanda <snanda@chromium.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mandeep Singh Baines <msb@chromium.org>
Cc: "Ma, Xindong" <xindong.ma@intel.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: "Tu, Xiaobing" <xiaobing.tu@intel.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change-Id: I5f214dec13b34e05c4b5fc8bc29df3ab7400efa1
Reviewed-on: http://git-master/r/421705
Reviewed-by: Sri Krishna Chowdary <schowdary@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Kerwin Wan <kerwinw@nvidia.com>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/oom_kill.c | 12
1 file changed, 8 insertions(+), 4 deletions(-)
```diff
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 71da6fa9b186..fc707b3d22ab 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -102,14 +102,19 @@ struct task_struct *find_lock_task_mm(struct task_struct *p)
 {
 	struct task_struct *t;
 
+	rcu_read_lock();
+
 	for_each_thread(p, t) {
 		task_lock(t);
 		if (likely(t->mm))
-			return t;
+			goto found;
 		task_unlock(t);
 	}
+	t = NULL;
+found:
+	rcu_read_unlock();
 
-	return NULL;
+	return t;
 }
 
 /* return true if the task is not adequate as candidate victim task. */
@@ -461,10 +466,8 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
 	}
 	read_unlock(&tasklist_lock);
 
-	rcu_read_lock();
 	p = find_lock_task_mm(victim);
 	if (!p) {
-		rcu_read_unlock();
 		put_task_struct(victim);
 		return;
 	} else if (victim != p) {
@@ -490,6 +493,7 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
 	 * That thread will now get access to memory reserves since it has a
 	 * pending fatal signal.
 	 */
+	rcu_read_lock();
 	for_each_process(p)
 		if (p->mm == mm && !same_thread_group(p, victim) &&
 		    !(p->flags & PF_KTHREAD)) {
```
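For readability, this is roughly how find_lock_task_mm() reads after the patch. It is a reconstruction from the hunk above, not standalone code: the kernel-internal helpers (for_each_thread(), task_lock(), rcu_read_lock()) and the surrounding mm/oom_kill.c context are assumed.

```c
struct task_struct *find_lock_task_mm(struct task_struct *p)
{
	struct task_struct *t;

	/* Pin the thread list so the walk is safe for lockless callers. */
	rcu_read_lock();

	for_each_thread(p, t) {
		task_lock(t);
		if (likely(t->mm))
			goto found;	/* return with t locked, as before */
		task_unlock(t);
	}
	t = NULL;
found:
	rcu_read_unlock();

	return t;
}
```

Note that on success the task_lock() taken on the returned thread is still held by the caller, exactly as before; only the RCU read-side section is now confined to the function itself, which is why oom_kill_process() can drop its own rcu_read_lock()/rcu_read_unlock() pair around the call.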