path: root/fs/proc
author    Linus Torvalds <torvalds@linux-foundation.org>  2015-11-01 16:43:24 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>  2015-11-01 16:43:24 -0800
commit    2e002662973fd8d67d5a760776a5d3ea3d3399a9 (patch)
tree      4c6feb1e7a10d602ef7c3f9e017cad48e52cc8a1 /fs/proc
parent    6a13feb9c82803e2b815eca72fa7a9f5561d7861 (diff)
parent    fc90888d07b8e17eec49c04bdb26344fdea96c3b (diff)
Merge branch 'fs-file-descriptor-optimization'
Merge file descriptor allocation speedup.

Eric Dumazet has a test-case for a fairly common network daemon load pattern: opening and closing a lot of sockets that each have very little work done on them.

It turns out that in that case, the cost of just finding the correct file descriptor number can be a dominating factor.

We've long had a trivial optimization for allocating file descriptors sequentially, but that optimization ends up being not very effective when other file descriptors are being closed concurrently, and the fd patterns are not some simple FIFO pattern. In such cases we ended up spending a lot of time just scanning the bitmap of open file descriptors in order to find the next file descriptor number to open.

This trivial patch-series mitigates that by simply introducing a second-level bitmap of which words in the first bitmap are already fully allocated. That cuts down the cost of scanning by an order of magnitude in some pathological (but realistic) cases.

The second patch is an even more trivial patch to avoid unnecessarily dirtying the cacheline for the close-on-exec bit array that normally ends up being all empty.

* fs-file-descriptor-optimization:
  vfs: conditionally clear close-on-exec flag
  vfs: Fix pathological performance case for __alloc_fd()
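The two ideas above can be illustrated with a minimal user-space sketch in C. This is not the kernel's actual __alloc_fd()/__clear_close_on_exec() code; the names open_fds, full_words, alloc_fd, free_fd, clear_close_on_exec, the fixed MAX_FDS table size, and the absence of locking are all assumptions made purely for illustration (the real state lives in struct fdtable and is resized and locked accordingly).

    #include <stdint.h>

    #define MAX_FDS   4096                 /* toy table: 64 words of 64 bits   */
    #define WORDS     (MAX_FDS / 64)       /* level-1 words                    */
    #define L2_WORDS  (WORDS / 64)         /* level-2 words (here exactly one) */

    static uint64_t open_fds[WORDS];       /* level 1: bit set = fd in use        */
    static uint64_t full_words[L2_WORDS];  /* level 2: bit set = that word is full */

    /* Allocate the lowest free fd, or -1 if the table is exhausted. */
    static int alloc_fd(void)
    {
        unsigned int w2, w1;
        int bit;

        /* Scan the small second-level bitmap for a block that still
         * contains a non-full level-1 word; full words are skipped
         * without ever being read. */
        for (w2 = 0; w2 < L2_WORDS; w2++)
            if (full_words[w2] != ~0ULL)
                break;
        if (w2 == L2_WORDS)
            return -1;

        /* Lowest clear bit in level 2 names the first non-full level-1
         * word (ctzll is a GCC/Clang builtin; the operand is non-zero
         * because the level-2 bitmap is kept in sync below). */
        w1 = w2 * 64 + (unsigned int)__builtin_ctzll(~full_words[w2]);

        /* Lowest clear bit in that word is the new descriptor number. */
        bit = __builtin_ctzll(~open_fds[w1]);
        open_fds[w1] |= 1ULL << bit;

        /* Keep level 2 in sync when the word just became full. */
        if (open_fds[w1] == ~0ULL)
            full_words[w2] |= 1ULL << (w1 % 64);

        return (int)(w1 * 64 + bit);
    }

    /* Release an fd and mark its word as no longer full. */
    static void free_fd(int fd)
    {
        unsigned int w1 = (unsigned int)fd / 64;

        open_fds[w1] &= ~(1ULL << (fd % 64));
        full_words[w1 / 64] &= ~(1ULL << (w1 % 64));
    }

    /* Second patch's idea: test before clearing, so the (usually all-zero)
     * close-on-exec bitmap's cacheline is not dirtied on every close(). */
    static void clear_close_on_exec(int fd, uint64_t *close_on_exec)
    {
        uint64_t mask = 1ULL << (fd % 64);

        if (close_on_exec[fd / 64] & mask)
            close_on_exec[fd / 64] &= ~mask;
    }

The speedup in this sketch comes from the level-2 scan touching one word per 64 level-1 words, so a mostly-full table is crossed in a handful of reads instead of a long linear scan of the open-fd bitmap.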
Diffstat (limited to 'fs/proc')
0 files changed, 0 insertions, 0 deletions