| author | Linus Torvalds <torvalds@linux-foundation.org> | 2019-06-02 08:51:30 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2019-06-02 08:51:30 -0700 |
| commit | 7b3064f0e8deb55b8655dd8d36d9d1e8fb62b71b (patch) | |
| tree | a991e83f2711bc4ee1ca7daa868fc269637b6d45 /lib | |
| parent | 3ab4436f688c2d2f221793953cd05435ca84261c (diff) | |
| parent | e577c8b64d58fe307ea4d5149d31615df2d90861 (diff) | |
Merge branch 'akpm' (patches from Andrew)
Merge misc fixes from Andrew Morton:
"Various fixes and followups"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
mm, compaction: make sure we isolate a valid PFN
include/linux/generic-radix-tree.h: fix kerneldoc comment
kernel/signal.c: trace_signal_deliver when signal_group_exit
drivers/iommu/intel-iommu.c: fix variable 'iommu' set but not used
spdxcheck.py: fix directory structures
kasan: initialize tag to 0xff in __kasan_kmalloc
z3fold: fix sheduling while atomic
scripts/gdb: fix invocation when CONFIG_COMMON_CLK is not set
mm/gup: continue VM_FAULT_RETRY processing even for pre-faults
ocfs2: fix error path kobject memory leak
memcg: make it work on sparse non-0-node systems
mm, memcg: consider subtrees in memory.events
prctl_set_mm: downgrade mmap_sem to read lock
prctl_set_mm: refactor checks from validate_prctl_map
kernel/fork.c: make max_threads symbol static
arch/arm/boot/compressed/decompress.c: fix build error due to lz4 changes
arch/parisc/configs/c8000_defconfig: remove obsoleted CONFIG_DEBUG_SLAB_LEAK
mm/vmalloc.c: fix typo in comment
lib/sort.c: fix kernel-doc notation warnings
mm: fix Documentation/vm/hmm.rst Sphinx warnings
Diffstat (limited to 'lib')
| -rw-r--r-- | lib/sort.c | 15 |
1 files changed, 9 insertions, 6 deletions
```diff
diff --git a/lib/sort.c b/lib/sort.c
index 50855ea8c262..cf408aec3733 100644
--- a/lib/sort.c
+++ b/lib/sort.c
@@ -43,8 +43,9 @@ static bool is_aligned(const void *base, size_t size, unsigned char align)
 
 /**
  * swap_words_32 - swap two elements in 32-bit chunks
- * @a, @b: pointers to the elements
- * @size: element size (must be a multiple of 4)
+ * @a: pointer to the first element to swap
+ * @b: pointer to the second element to swap
+ * @n: element size (must be a multiple of 4)
  *
  * Exchange the two objects in memory. This exploits base+index addressing,
  * which basically all CPUs have, to minimize loop overhead computations.
@@ -65,8 +66,9 @@ static void swap_words_32(void *a, void *b, size_t n)
 
 /**
  * swap_words_64 - swap two elements in 64-bit chunks
- * @a, @b: pointers to the elements
- * @size: element size (must be a multiple of 8)
+ * @a: pointer to the first element to swap
+ * @b: pointer to the second element to swap
+ * @n: element size (must be a multiple of 8)
  *
  * Exchange the two objects in memory. This exploits base+index
  * addressing, which basically all CPUs have, to minimize loop overhead
@@ -100,8 +102,9 @@ static void swap_words_64(void *a, void *b, size_t n)
 
 /**
  * swap_bytes - swap two elements a byte at a time
- * @a, @b: pointers to the elements
- * @size: element size
+ * @a: pointer to the first element to swap
+ * @b: pointer to the second element to swap
+ * @n: element size
  *
  * This is the fallback if alignment doesn't allow using larger chunks.
  */
```
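The corrected kerneldoc describes swap helpers that exchange two objects by indexing backwards from the element size, using base+index addressing to keep loop overhead low. The sketch below is a standalone userspace illustration of that pattern in plain C, assuming a `swap_words_32`-style helper with the documented `@a`/`@b`/`@n` contract; it is not the kernel's actual lib/sort.c implementation.

```c
/*
 * Minimal sketch of a swap_words_32-style helper, as described by the
 * kerneldoc above: exchange two objects in 32-bit chunks, counting the
 * size n down to zero and indexing both bases by n each iteration.
 * Illustration only, not the kernel's lib/sort.c code.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void swap_words_32(void *a, void *b, size_t n)
{
	/* n must be a nonzero multiple of 4, per the fixed @n description */
	do {
		uint32_t t;

		n -= 4;
		memcpy(&t, (char *)a + n, 4);            /* t    = a[n] */
		memcpy((char *)a + n, (char *)b + n, 4); /* a[n] = b[n] */
		memcpy((char *)b + n, &t, 4);            /* b[n] = t    */
	} while (n);
}

int main(void)
{
	uint32_t x[2] = { 1, 2 }, y[2] = { 3, 4 };

	swap_words_32(x, y, sizeof(x));
	printf("%u %u %u %u\n", x[0], x[1], y[0], y[1]); /* prints: 3 4 1 2 */
	return 0;
}
```

The sketch uses memcpy for the 32-bit loads and stores to stay portable outside the kernel; the in-tree code can use direct u32 accesses because is_aligned() has already verified alignment.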
