| author | Vlastimil Babka <vbabka@suse.cz> | 2025-11-25 14:38:41 +0100 |
|---|---|---|
| committer | Vlastimil Babka <vbabka@suse.cz> | 2025-11-25 14:38:41 +0100 |
| commit | a8ec08bf32595ea4b109e3c7f679d4457d1c58c0 (patch) | |
| tree | 42af7f0a526f1df607465a3d29b52573e828310e /mm/page_alloc.c | |
| parent | ed80cc758b784a1ed297f9130625de217a904ba5 (diff) | |
| parent | 48233291461b0539d798d00aaacccf1b3b163102 (diff) | |
Merge branch 'slab/for-6.19/mempool_alloc_bulk' into slab/for-next
Merges series "mempool_alloc_bulk and various mempool improvements v3"
from Christoph Hellwig.
From the cover letter [1]:
This series adds a bulk version of mempool_alloc that makes allocating
multiple objects deadlock safe.
The initial user is the blk-crypto-fallback code:
https://lore.kernel.org/linux-block/20251031093517.1603379-1-hch@lst.de/
with which v1 was posted, but I also have a few other users in mind.
Link: https://lore.kernel.org/all/20251113084022.1255121-1-hch@lst.de/ [1]
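For context, the deadlock the bulk API guards against arises when a caller draws several objects from the same mempool one at a time: two such callers can each pin part of the pool's reserve while sleeping for the remainder. The sketch below illustrates that pattern with the long-standing mempool_alloc()/mempool_free() API only; mempool_alloc_bulk() itself is introduced by the merged series, so its exact signature should be taken from the series, and the grab_objects() helper here is purely hypothetical.

```c
/*
 * Illustrative only: why allocating several objects from one mempool
 * with per-object mempool_alloc() calls can deadlock.  Loops of this
 * shape are what the new bulk interface is meant to replace.
 */
#include <linux/mempool.h>
#include <linux/gfp.h>
#include <linux/errno.h>

/* Hypothetical helper, not part of the merged series. */
static int grab_objects(mempool_t *pool, void **objs, int nr, gfp_t gfp)
{
	int i;

	for (i = 0; i < nr; i++) {
		/*
		 * With a sleeping gfp mask each iteration may block until an
		 * element is returned to the pool.  If two callers each need
		 * 'nr' objects from a pool whose reserve is smaller than
		 * 2 * nr, both can end up holding part of the reserve while
		 * waiting forever for the rest: the deadlock a single bulk
		 * allocation is designed to avoid.
		 */
		objs[i] = mempool_alloc(pool, gfp);
		if (!objs[i])	/* NULL only for gfp masks that cannot sleep */
			goto unwind;
	}
	return 0;

unwind:
	while (i--)
		mempool_free(objs[i], pool);
	return -ENOMEM;
}
```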
Diffstat (limited to 'mm/page_alloc.c')

| mode | path | lines changed |
|---|---|---|
| -rw-r--r-- | mm/page_alloc.c | 15 |

1 file changed, 10 insertions, 5 deletions
```diff
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 600d9e981c23..b3d37169a553 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4982,13 +4982,18 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
  * @nr_pages: The number of pages desired in the array
  * @page_array: Array to store the pages
  *
- * This is a batched version of the page allocator that attempts to
- * allocate nr_pages quickly. Pages are added to the page_array.
+ * This is a batched version of the page allocator that attempts to allocate
+ * @nr_pages quickly. Pages are added to @page_array.
  *
- * Note that only NULL elements are populated with pages and nr_pages
- * is the maximum number of pages that will be stored in the array.
+ * Note that only the elements in @page_array that were cleared to %NULL on
+ * entry are populated with newly allocated pages. @nr_pages is the maximum
+ * number of pages that will be stored in the array.
+ *
- * Returns the number of pages in the array.
+ * Returns the number of pages in @page_array, including ones already
+ * allocated on entry. This can be less than the number requested in @nr_pages,
+ * but all empty slots are filled from the beginning. I.e., if all slots in
+ * @page_array were set to %NULL on entry, the slots from 0 to the return value
+ * - 1 will be filled.
  */
 unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 		nodemask_t *nodemask, int nr_pages,
```
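The clarified kernel-doc describes the refill pattern the bulk allocator supports: only slots that are %NULL on entry are filled, slots are filled from the beginning, and the return value counts pages already present as well as newly allocated ones. Below is a minimal sketch of that pattern, assuming an alloc_pages_bulk(gfp, nr_pages, page_array) wrapper around alloc_pages_bulk_noprof(); the wrapper name and signature, as well as the refill_page_array() helper and NR_BOUNCE_PAGES constant, are assumptions for illustration and should be checked against include/linux/gfp.h in the tree this commit lands in.

```c
/*
 * Sketch of the "refill only the NULL slots" contract documented above.
 * Assumes alloc_pages_bulk(gfp, nr_pages, page_array); verify against
 * include/linux/gfp.h before relying on this.
 */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/errno.h>

#define NR_BOUNCE_PAGES	16	/* made-up size for the example */

static int refill_page_array(struct page **pages)
{
	unsigned long filled;

	/*
	 * Slots still holding pages from a previous round are left alone;
	 * only NULL slots are populated.  The return value includes the
	 * pages that were already there, so the array is fully populated
	 * once it equals NR_BOUNCE_PAGES.
	 */
	filled = alloc_pages_bulk(GFP_NOIO, NR_BOUNCE_PAGES, pages);
	if (filled < NR_BOUNCE_PAGES)
		return -ENOMEM;	/* caller may retry; filled slots are kept */

	return 0;
}
```

Because partially filled arrays are preserved across calls, a caller that gets a short count can simply call again later without tracking which slots were satisfied.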
