author		David Hildenbrand <david@redhat.com>		2025-07-04 12:25:09 +0200
committer	Andrew Morton <akpm@linux-foundation.org>	2025-07-13 16:38:28 -0700
commit		22d103aef090dc688a88881fb955376dec1228d5 (patch)
tree		f94b9fa5b3f499ad6f95d5a5f978b5c799764247 /mm
parent		34727dee04994c8ceb1bb8a927af0a88e52e103c (diff)
mm/migration: remove PageMovable()
Previously, if __ClearPageMovable() was invoked on a page, __PageMovable()
would return false, but PageMovable() would still have returned true,
because the page's movable ops remained registered. With
__ClearPageMovable() gone, the two are exactly equivalent.

So we can replace PageMovable() checks with __PageMovable(). In fact,
__PageMovable() cannot change until a page is freed, so we can turn some
PageMovable() checks into sanity checks for __PageMovable().
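To illustrate the equivalence being relied upon, here is a minimal sketch
(a hypothetical helper, not the exact code this patch removes from
mm/compaction.c; see the diff below for the real removed function):

	/*
	 * Simplified former PageMovable(): with __ClearPageMovable() gone,
	 * a page that is __PageMovable() keeps its movable ops registered
	 * until it is freed, so the extra mops lookup is redundant.
	 */
	static bool PageMovable_sketch(struct page *page)
	{
		if (!__PageMovable(page))
			return false;
		/* Always non-NULL now, so this always reduces to "true". */
		return page_movable_ops(page) != NULL;
	}

Hence callers can test __PageMovable() directly, or assert it with
VM_WARN_ON_ONCE_PAGE() where it must already hold, as the diff below does.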
Link: https://lkml.kernel.org/r/20250704102524.326966-16-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Brendan Jackman <jackmanb@google.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Chengming Zhou <chengming.zhou@linux.dev>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Eugenio Pérez <eperezma@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Gregory Price <gourry@gourry.net>
Cc: "Huang, Ying" <ying.huang@linux.alibaba.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Jerrin Shaji George <jerrin.shaji-george@broadcom.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Mathew Brost <matthew.brost@intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Peter Xu <peterx@redhat.com>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: xu xin <xu.xin16@zte.com.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/compaction.c	15
-rw-r--r--	mm/migrate.c	18
2 files changed, 10 insertions, 23 deletions
diff --git a/mm/compaction.c b/mm/compaction.c
index 889ec696ba96..5c3737301701 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -114,21 +114,6 @@ static unsigned long release_free_list(struct list_head *freepages)
 }
 
 #ifdef CONFIG_COMPACTION
-bool PageMovable(struct page *page)
-{
-	const struct movable_operations *mops;
-
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	if (!__PageMovable(page))
-		return false;
-
-	mops = page_movable_ops(page);
-	if (mops)
-		return true;
-
-	return false;
-}
-
 void __SetPageMovable(struct page *page, const struct movable_operations *mops)
 {
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
diff --git a/mm/migrate.c b/mm/migrate.c
index 61e98ed46f13..1f07c8f1fb74 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -87,9 +87,12 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
 		goto out;
 
 	/*
-	 * Check movable flag before taking the page lock because
+	 * Check for movable_ops pages before taking the page lock because
 	 * we use non-atomic bitops on newly allocated page flags so
 	 * unconditionally grabbing the lock ruins page's owner side.
+	 *
+	 * Note that once a page has movable_ops, it will stay that way
+	 * until the page was freed.
 	 */
 	if (unlikely(!__PageMovable(page)))
 		goto out_putfolio;
@@ -108,7 +111,8 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
 	if (unlikely(!folio_trylock(folio)))
 		goto out_putfolio;
 
-	if (!PageMovable(page) || PageIsolated(page))
+	VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
+	if (PageIsolated(page))
 		goto out_no_isolated;
 
 	mops = page_movable_ops(page);
@@ -149,11 +153,10 @@ static void putback_movable_ops_page(struct page *page)
 	 */
 	struct folio *folio = page_folio(page);
 
+	VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
 	VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
 	folio_lock(folio);
-	/* If the page was released by it's owner, there is nothing to do. */
-	if (PageMovable(page))
-		page_movable_ops(page)->putback_page(page);
+	page_movable_ops(page)->putback_page(page);
 	ClearPageIsolated(page);
 	folio_unlock(folio);
 	folio_put(folio);
@@ -191,10 +194,9 @@ static int migrate_movable_ops_page(struct page *dst, struct page *src,
 {
 	int rc = MIGRATEPAGE_SUCCESS;
 
+	VM_WARN_ON_ONCE_PAGE(!__PageMovable(src), src);
 	VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
-	/* If the page was released by it's owner, there is nothing to do. */
-	if (PageMovable(src))
-		rc = page_movable_ops(src)->migrate_page(dst, src, mode);
+	rc = page_movable_ops(src)->migrate_page(dst, src, mode);
 	if (rc == MIGRATEPAGE_SUCCESS)
 		ClearPageIsolated(src);
 	return rc;