author		Tejun Heo <tj@kernel.org>	2012-12-20 15:05:40 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2012-12-20 17:40:20 -0800
commit		154b454edaf6d94a69016db6c342c57fa935bbe9
tree		504eb789a9e802af9f534cc3a377f1efb29171cb /mm
parent		495e9d84607cda966ba6d223d5eb9df0070cd21a
memcg: don't register hotcpu notifier from ->css_alloc()
Commit 648bb56d076b ("cgroup: lock cgroup_mutex in cgroup_init_subsys()")
made cgroup_init_subsys() grab cgroup_mutex before invoking
->css_alloc() for the root css. Because memcg registers a hotcpu notifier
from ->css_alloc() for the root css, this introduced a circular locking
dependency between cgroup_mutex and cpu hotplug.
Fix it by moving hotcpu notifier registration to a subsys initcall.
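For illustration, a minimal sketch of the subsys_initcall pattern the fix
uses (not the actual memcg code; my_cpu_callback and my_subsys_init are
hypothetical names). Initcalls run during boot without cgroup_mutex held,
so the notifier registration no longer nests inside the cgroup locking:

	#include <linux/init.h>
	#include <linux/cpu.h>
	#include <linux/notifier.h>

	/* Hypothetical callback: invoked on CPU hotplug events. */
	static int my_cpu_callback(struct notifier_block *nb,
				   unsigned long action, void *hcpu)
	{
		/* react to CPU_ONLINE/CPU_DEAD etc. here */
		return NOTIFY_OK;
	}

	static int __init my_subsys_init(void)
	{
		/*
		 * Registering from an initcall, rather than from
		 * ->css_alloc() under cgroup_mutex, avoids creating the
		 * cgroup_mutex -> cpu hotplug dependency lockdep flags.
		 */
		hotcpu_notifier(my_cpu_callback, 0);
		return 0;
	}
	subsys_initcall(my_subsys_init);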
======================================================
[ INFO: possible circular locking dependency detected ]
3.7.0-rc4-work+ #42 Not tainted
-------------------------------------------------------
bash/645 is trying to acquire lock:
(cgroup_mutex){+.+.+.}, at: [<ffffffff8110c5b7>] cgroup_lock+0x17/0x20
but task is already holding lock:
(cpu_hotplug.lock){+.+.+.}, at: [<ffffffff8109300f>] cpu_hotplug_begin+0x2f/0x60
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (cpu_hotplug.lock){+.+.+.}:
lock_acquire+0x97/0x1e0
mutex_lock_nested+0x61/0x3b0
get_online_cpus+0x3c/0x60
rebuild_sched_domains_locked+0x1b/0x70
cpuset_write_resmask+0x298/0x2c0
cgroup_file_write+0x1ef/0x300
vfs_write+0xa8/0x160
sys_write+0x52/0xa0
system_call_fastpath+0x16/0x1b
-> #0 (cgroup_mutex){+.+.+.}:
__lock_acquire+0x14ce/0x1d20
lock_acquire+0x97/0x1e0
mutex_lock_nested+0x61/0x3b0
cgroup_lock+0x17/0x20
cpuset_handle_hotplug+0x1b/0x560
cpuset_update_active_cpus+0xe/0x10
cpuset_cpu_inactive+0x47/0x50
notifier_call_chain+0x66/0x150
__raw_notifier_call_chain+0xe/0x10
__cpu_notify+0x20/0x40
_cpu_down+0x7e/0x2f0
cpu_down+0x36/0x50
store_online+0x5d/0xe0
dev_attr_store+0x18/0x30
sysfs_write_file+0xe0/0x150
vfs_write+0xa8/0x160
sys_write+0x52/0xa0
system_call_fastpath+0x16/0x1b
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(cpu_hotplug.lock);
                               lock(cgroup_mutex);
                               lock(cpu_hotplug.lock);
  lock(cgroup_mutex);
*** DEADLOCK ***
5 locks held by bash/645:
#0: (&buffer->mutex){+.+.+.}, at: [<ffffffff8123bab8>] sysfs_write_file+0x48/0x150
#1: (s_active#42){.+.+.+}, at: [<ffffffff8123bb38>] sysfs_write_file+0xc8/0x150
#2: (x86_cpu_hotplug_driver_mutex){+.+...}, at: [<ffffffff81079277>] cpu_hotplug_driver_lock+0x17/0x20
#3: (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff81093157>] cpu_maps_update_begin+0x17/0x20
#4: (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff8109300f>] cpu_hotplug_begin+0x2f/0x60
stack backtrace:
Pid: 645, comm: bash Not tainted 3.7.0-rc4-work+ #42
Call Trace:
print_circular_bug+0x28e/0x29f
__lock_acquire+0x14ce/0x1d20
lock_acquire+0x97/0x1e0
mutex_lock_nested+0x61/0x3b0
cgroup_lock+0x17/0x20
cpuset_handle_hotplug+0x1b/0x560
cpuset_update_active_cpus+0xe/0x10
cpuset_cpu_inactive+0x47/0x50
notifier_call_chain+0x66/0x150
__raw_notifier_call_chain+0xe/0x10
__cpu_notify+0x20/0x40
_cpu_down+0x7e/0x2f0
cpu_down+0x36/0x50
store_online+0x5d/0xe0
dev_attr_store+0x18/0x30
sysfs_write_file+0xe0/0x150
vfs_write+0xa8/0x160
sys_write+0x52/0xa0
system_call_fastpath+0x16/0x1b
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/memcontrol.c	14
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f3009b4bae51..09255ec8159c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6090,7 +6090,6 @@ mem_cgroup_css_alloc(struct cgroup *cont)
 						&per_cpu(memcg_stock, cpu);
 			INIT_WORK(&stock->work, drain_local_stock);
 		}
-		hotcpu_notifier(memcg_cpu_hotplug_callback, 0);
 	} else {
 		parent = mem_cgroup_from_cont(cont->parent);
 		memcg->use_hierarchy = parent->use_hierarchy;
@@ -6756,6 +6755,19 @@ struct cgroup_subsys mem_cgroup_subsys = {
 	.use_id = 1,
 };
 
+/*
+ * The rest of init is performed during ->css_alloc() for root css which
+ * happens before initcalls.  hotcpu_notifier() can't be done together as
+ * it would introduce circular locking by adding cgroup_lock -> cpu hotplug
+ * dependency.  Do it from a subsys_initcall().
+ */
+static int __init mem_cgroup_init(void)
+{
+	hotcpu_notifier(memcg_cpu_hotplug_callback, 0);
+	return 0;
+}
+subsys_initcall(mem_cgroup_init);
+
 #ifdef CONFIG_MEMCG_SWAP
 static int __init enable_swap_account(char *s)
 {