author		Yuanhan Liu <yuanhan.liu@linux.intel.com>	2013-02-04 14:28:48 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2013-02-05 20:38:48 +1100
commit		631b0cfdbd801ceae8762e8d287f15da26792ebe (patch)
tree		b267575bb7eaf7b5ce21290cfc5e9a86ef7da952
parent		249d9d9d6b7bfa3cf96c17d368eba2e32122aad1 (diff)
mm: fix wrong comments about anon_vma lock
The anon_vma lock has been an rwsem since commit 5a505085f043 ("mm/rmap: Convert the struct
anon_vma::mutex to an rwsem").  Most comments were converted to the new
rwsem lock, but two stale references were missed, found with:
$ git grep 'anon_vma->mutex'
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r--	include/linux/mmu_notifier.h	| 2
-rw-r--r--	mm/mmap.c			| 2
2 files changed, 2 insertions, 2 deletions
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index bc823c4c028b..deca87452528 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -151,7 +151,7 @@ struct mmu_notifier_ops {
  * Therefore notifier chains can only be traversed when either
  *
  * 1. mmap_sem is held.
- * 2. One of the reverse map locks is held (i_mmap_mutex or anon_vma->mutex).
+ * 2. One of the reverse map locks is held (i_mmap_mutex or anon_vma->rwsem).
  * 3. No other concurrent thread can access the list (release)
  */
 struct mmu_notifier {
diff --git a/mm/mmap.c b/mm/mmap.c
index 35730ee9d515..d1e4124f3d0e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2943,7 +2943,7 @@ static void vm_lock_mapping(struct mm_struct *mm, struct address_space *mapping)
  * vma in this mm is backed by the same anon_vma or address_space.
  *
  * We can take all the locks in random order because the VM code
- * taking i_mmap_mutex or anon_vma->mutex outside the mmap_sem never
+ * taking i_mmap_mutex or anon_vma->rwsem outside the mmap_sem never
  * takes more than one of them in a row. Secondly we're protected
  * against a concurrent mm_take_all_locks() by the mm_all_locks_mutex.
  *
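
As a rough illustration of the rwsem-based locking that the corrected comments now
describe, here is a minimal sketch (not part of this commit; the example functions
are hypothetical, and the wrapper names are the anon_vma helpers from the 3.8-era
include/linux/rmap.h):

#include <linux/rmap.h>

/*
 * Hypothetical reader side: an rmap walk takes the anon_vma lock shared.
 * Since 5a505085f043 this lock is a rw_semaphore, not a mutex, so it is
 * acquired through the rmap.h wrappers rather than mutex_lock().
 */
static void example_rmap_walk(struct anon_vma *anon_vma)
{
	anon_vma_lock_read(anon_vma);	/* down_read(&anon_vma->root->rwsem) */
	/* ... traverse the anon_vma interval tree here ... */
	anon_vma_unlock_read(anon_vma);
}

/*
 * Hypothetical writer side: modifying the reverse map takes the lock
 * exclusive, mirroring the old mutex_lock() behaviour.
 */
static void example_rmap_update(struct anon_vma *anon_vma)
{
	anon_vma_lock_write(anon_vma);	/* down_write(&anon_vma->root->rwsem) */
	/* ... add or remove anon_vma_chain entries here ... */
	anon_vma_unlock_write(anon_vma);
}

The read/write split is the point of the conversion the comments refer to: multiple
rmap walkers may now hold the lock concurrently, while writers still exclude everyone.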