Diffstat (limited to 'Documentation/vm')
-rw-r--r--   Documentation/vm/hugetlbpage.txt          |  25
-rw-r--r--   Documentation/vm/soft-dirty.txt           |   7
-rw-r--r--   Documentation/vm/split_page_table_lock    |  94
-rw-r--r--   Documentation/vm/zswap.txt                |   8
4 files changed, 117 insertions(+), 17 deletions(-)
diff --git a/Documentation/vm/hugetlbpage.txt b/Documentation/vm/hugetlbpage.txt
index 4ac359b7aa17..bdd4bb97fff7 100644
--- a/Documentation/vm/hugetlbpage.txt
+++ b/Documentation/vm/hugetlbpage.txt
@@ -165,6 +165,7 @@ which function as described above for the default huge page-sized case.

 Interaction of Task Memory Policy with Huge Page Allocation/Freeing
+===================================================================

 Whether huge pages are allocated and freed via the /proc interface or
 the /sysfs interface using the nr_hugepages_mempolicy attribute, the NUMA
@@ -229,6 +230,7 @@ resulting effect on persistent huge page allocation is as follows:
 of huge pages over all on-lines nodes with memory.

 Per Node Hugepages Attributes
+=============================

 A subset of the contents of the root huge page control directory in sysfs,
 described above, will be replicated under each the system device of each
@@ -258,6 +260,7 @@ applied, from which node the huge page allocation will be attempted.

 Using Huge Pages
+================

 If the user applications are going to request huge pages using mmap system
 call, then it is required that system administrator mount a file system of
@@ -296,20 +299,16 @@ calls, though the mount of filesystem will be required for using mmap calls
 without MAP_HUGETLB. For an example of how to use mmap with MAP_HUGETLB see
 map_hugetlb.c.

-*******************************************************************
+Examples
+========
-/*
- * map_hugetlb: see tools/testing/selftests/vm/map_hugetlb.c
- */
+1) map_hugetlb: see tools/testing/selftests/vm/map_hugetlb.c
-*******************************************************************
+2) hugepage-shm: see tools/testing/selftests/vm/hugepage-shm.c
-/*
- * hugepage-shm: see tools/testing/selftests/vm/hugepage-shm.c
- */
+3) hugepage-mmap: see tools/testing/selftests/vm/hugepage-mmap.c
-*******************************************************************
-
-/*
- * hugepage-mmap: see tools/testing/selftests/vm/hugepage-mmap.c
- */
+4) The libhugetlbfs (http://libhugetlbfs.sourceforge.net) library provides a
+   wide range of userspace tools to help with huge page usability, environment
+   setup, and control. Furthermore it provides useful test cases that should be
+   used when modifying code to ensure no regressions are introduced.
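For quick reference, here is a minimal userspace sketch in the spirit of the
map_hugetlb.c selftest referenced above; it is not part of the patch. The 2MB
length and the fallback MAP_HUGETLB value are assumptions, and the mmap() only
succeeds if huge pages have been reserved via nr_hugepages:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #ifndef MAP_HUGETLB
    #define MAP_HUGETLB 0x40000     /* value on x86 and most arches; an assumption */
    #endif

    #define LENGTH (2UL * 1024 * 1024)  /* assumes a 2MB default huge page size */

    int main(void)
    {
            /* No hugetlbfs mount or file descriptor is needed with MAP_HUGETLB. */
            void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

            if (addr == MAP_FAILED) {
                    perror("mmap");     /* fails if no huge pages are reserved */
                    return 1;
            }

            memset(addr, 0, LENGTH);    /* touch the mapping to fault in huge pages */
            printf("mapped %lu bytes at %p backed by huge pages\n", LENGTH, addr);

            munmap(addr, LENGTH);
            return 0;
    }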
diff --git a/Documentation/vm/soft-dirty.txt b/Documentation/vm/soft-dirty.txt
index 9a12a5956bc0..55684d11a1e8 100644
--- a/Documentation/vm/soft-dirty.txt
+++ b/Documentation/vm/soft-dirty.txt
@@ -28,6 +28,13 @@ This is so, since the pages are still mapped to physical memory, and thus
 all the kernel does is finds this fact out and puts both writable and
 soft-dirty bits on the PTE.

+ While in most cases tracking memory changes by #PF-s is more than enough,
+there is still a scenario where we can lose soft dirty bits -- a task
+unmaps a previously mapped memory region and then maps a new one at exactly
+the same place. When unmap is called, the kernel internally clears PTE values
+including soft dirty bits. To notify the user space application about such
+memory region renewal, the kernel always marks new memory regions (and
+expanded regions) as soft dirty.

 This feature is actively used by the checkpoint-restore project.
 You can find more details about it on http://criu.org
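As an illustration (not part of the patch), a minimal userspace sketch of the
clear/check cycle soft-dirty.txt describes, using the interfaces the full
document names: writing "4" to /proc/PID/clear_refs clears the soft-dirty
bits, and bit 55 of the corresponding /proc/PID/pagemap entry reports them.
The hard-coded 4K page size is an assumption:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define PAGE_SIZE_ASSUMED   4096UL          /* assumption: 4K base pages */
    #define PM_SOFT_DIRTY       (1ULL << 55)    /* soft-dirty flag in a pagemap entry */

    /* Clear soft-dirty bits for the whole task: echo 4 > /proc/self/clear_refs. */
    static int clear_soft_dirty(void)
    {
            int ret, fd = open("/proc/self/clear_refs", O_WRONLY);

            if (fd < 0)
                    return -1;
            ret = (write(fd, "4", 1) == 1) ? 0 : -1;
            close(fd);
            return ret;
    }

    /* Return 1 if the page containing 'addr' is soft-dirty, 0 if not, -1 on error. */
    static int page_is_soft_dirty(const void *addr)
    {
            uint64_t entry;
            off_t off = ((uintptr_t)addr / PAGE_SIZE_ASSUMED) * sizeof(entry);
            int fd = open("/proc/self/pagemap", O_RDONLY);

            if (fd < 0)
                    return -1;
            if (pread(fd, &entry, sizeof(entry), off) != sizeof(entry)) {
                    close(fd);
                    return -1;
            }
            close(fd);
            return !!(entry & PM_SOFT_DIRTY);
    }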
diff --git a/Documentation/vm/split_page_table_lock b/Documentation/vm/split_page_table_lock
new file mode 100644
index 000000000000..7521d367f21d
--- /dev/null
+++ b/Documentation/vm/split_page_table_lock
@@ -0,0 +1,94 @@
+Split page table lock
+=====================
+
+Originally, the mm->page_table_lock spinlock protected all page tables of the
+mm_struct. But this approach leads to poor page fault scalability of
+multi-threaded applications due to high contention on the lock. To improve
+scalability, the split page table lock was introduced.
+
+With split page table lock we have a separate per-table lock to serialize
+access to the table. At the moment we use the split lock for PTE and PMD
+tables. Access to higher level tables is protected by mm->page_table_lock.
+
+There are helpers to lock/unlock a table and other accessor functions:
+ - pte_offset_map_lock()
+	maps pte and takes PTE table lock, returns pointer to the taken
+	lock;
+ - pte_unmap_unlock()
+	unlocks and unmaps PTE table;
+ - pte_alloc_map_lock()
+	allocates PTE table if needed and takes the lock, returns pointer
+	to the taken lock or NULL if allocation failed;
+ - pte_lockptr()
+	returns pointer to PTE table lock;
+ - pmd_lock()
+	takes PMD table lock, returns pointer to taken lock;
+ - pmd_lockptr()
+	returns pointer to PMD table lock;
+
+Split page table lock for PTE tables is enabled at compile time if
+CONFIG_SPLIT_PTLOCK_CPUS (usually 4) is less than or equal to NR_CPUS.
+If the split lock is disabled, all tables are guarded by mm->page_table_lock.
+
+Split page table lock for PMD tables is enabled if it's enabled for PTE
+tables and the architecture supports it (see below).
+
+Hugetlb and split page table lock
+---------------------------------
+
+Hugetlb can support several page sizes. We use the split lock only for the
+PMD level, but not for PUD.
+
+Hugetlb-specific helpers:
+ - huge_pte_lock()
+	takes the pmd split lock for a PMD_SIZE page, mm->page_table_lock
+	otherwise;
+ - huge_pte_lockptr()
+	returns pointer to table lock;
+
+Support of split page table lock by an architecture
+---------------------------------------------------
+
+There is no need to specially enable PTE split page table lock:
+everything required is done by pgtable_page_ctor() and pgtable_page_dtor(),
+which must be called on PTE table allocation / freeing.
+
+Make sure the architecture doesn't use the slab allocator for page table
+allocation: slab uses page->slab_cache and page->first_page for its pages.
+These fields share storage with page->ptl.
+
+PMD split lock only makes sense if you have more than two page table
+levels.
+
+PMD split lock enabling requires a pgtable_pmd_page_ctor() call on PMD table
+allocation and pgtable_pmd_page_dtor() on freeing.
+
+Allocation usually happens in pmd_alloc_one(), freeing in pmd_free(), but
+make sure you cover all PMD table allocation / freeing paths: e.g. X86_PAE
+preallocates a few PMDs on pgd_alloc().
+
+With everything in place you can set CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK.
+
+NOTE: pgtable_page_ctor() and pgtable_pmd_page_ctor() can fail -- the
+failure must be handled properly.
+
+page->ptl
+---------
+
+page->ptl is used to access the split page table lock, where 'page' is the
+struct page of the page containing the table. It shares storage with
+page->private (and a few other fields in the union).
+
+To avoid increasing the size of struct page and to get the best performance,
+we use a trick:
+ - if spinlock_t fits into long, we use page->ptl as the spinlock, so we
+   can avoid indirect access and save a cache line.
+ - if the size of spinlock_t is bigger than the size of long, we use
+   page->ptl as a pointer to spinlock_t and allocate it dynamically. This
+   makes it possible to use the split lock with DEBUG_SPINLOCK or
+   DEBUG_LOCK_ALLOC enabled, but costs one more cache line for the
+   indirect access;
+
+The spinlock_t is allocated in pgtable_page_ctor() for PTE tables and in
+pgtable_pmd_page_ctor() for PMD tables.
+
+Please, never access page->ptl directly -- use the appropriate helper.
diff --git a/Documentation/vm/zswap.txt b/Documentation/vm/zswap.txt
index 7e492d8aaeaf..00c3d31e7971 100644
--- a/Documentation/vm/zswap.txt
+++ b/Documentation/vm/zswap.txt
@@ -8,7 +8,7 @@ significant performance improvement if reads from the compressed cache are
 faster than reads from a swap device.

 NOTE: Zswap is a new feature as of v3.11 and interacts heavily with memory
-reclaim. This interaction has not be fully explored on the large set of
+reclaim. This interaction has not been fully explored on the large set of
 potential configurations and workloads that exist. For this reason, zswap
 is a work in progress and should be considered experimental.
@@ -23,7 +23,7 @@ Some potential benefits:
 drastically reducing life-shortening writes.

 Zswap evicts pages from compressed cache on an LRU basis to the backing swap
-device when the compressed pool reaches it size limit. This requirement had
+device when the compressed pool reaches its size limit. This requirement had
 been identified in prior community discussions.

 To enabled zswap, the "enabled" attribute must be set to 1 at boot time. e.g.
@@ -37,7 +37,7 @@ the backing swap device in the case that the compressed pool is full.

 Zswap makes use of zbud for the managing the compressed memory pool. Each
 allocation in zbud is not directly accessible by address. Rather, a handle is
-return by the allocation routine and that handle must be mapped before being
+returned by the allocation routine and that handle must be mapped before being
 accessed. The compressed memory pool grows on demand and shrinks as compressed
 pages are freed. The pool is not preallocated.
@@ -56,7 +56,7 @@ in the swap_map goes to 0) the swap code calls the zswap invalidate function,
 via frontswap, to free the compressed entry.

 Zswap seeks to be simple in its policies. Sysfs attributes allow for one user
-controlled policies:
+controlled policy:

 * max_pool_percent - The maximum percentage of memory that the compressed
 pool can occupy.
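Returning to the split page table lock document added above: below is a
hypothetical in-kernel sketch (not from this patch) of how the PTE-level
accessors are typically used together. The function itself is made up for
illustration; only pmd_lock(), pte_offset_map_lock() and pte_unmap_unlock()
are the real helpers being shown, and it assumes addr/end are page aligned
and covered by a single PMD, with mmap_sem held by the caller:

    #include <linux/mm.h>

    /*
     * Hypothetical helper, for illustration only: count the present PTEs in
     * [addr, end) under one PMD, using the split page table locks.
     */
    static int count_present_ptes(struct mm_struct *mm, pmd_t *pmd,
                                  unsigned long addr, unsigned long end)
    {
            spinlock_t *ptl;
            pte_t *pte;
            int present = 0;

            /* The PMD entry itself is examined under its own split lock. */
            ptl = pmd_lock(mm, pmd);
            if (pmd_none(*pmd)) {
                    spin_unlock(ptl);
                    return 0;
            }
            spin_unlock(ptl);

            /* Map the PTE table and take its split lock in one step. */
            pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
            for (; addr != end; pte++, addr += PAGE_SIZE)
                    if (pte_present(*pte))
                            present++;
            /* Drop the lock and unmap the table again. */
            pte_unmap_unlock(pte - 1, ptl);

            return present;
    }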