author		Andy Lutomirski <luto@amacapital.net>	2014-09-05 15:13:56 -0700
committer	H. Peter Anvin <hpa@linux.intel.com>	2014-09-08 14:14:12 -0700
commit		1dcf74f6edfc3a9acd84d83d8865dd9e2a3b1d1e (patch)
tree		21030b6f0394f5b82cd17b96fd0008375b3f254b /kernel/seccomp.c
parent		54eea9957f5763dd1a2555d7e4cb53b4dd389cc6 (diff)
x86_64, entry: Use split-phase syscall_trace_enter for 64-bit syscalls
On KVM on my box, this reduces the overhead of an always-accept
seccomp filter from ~130ns to ~17ns. Most of that comes from
avoiding IRET on every syscall when seccomp is enabled.
In extremely approximate hacked-up benchmarking, just bypassing IRET
saves about 80ns, so there's another ~33ns of savings here from
simplifying the seccomp path.
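As a rough illustration of the split-phase idea (not the kernel's actual
entry code; the struct and function names below are invented for the
example), the sketch models a cheap phase-1 check that keeps the common
case, such as an always-accept seccomp filter with no active tracer, on
the fast SYSRET return path, and only falls back to a heavyweight phase 2
when real tracing work forces the slow path:

	/*
	 * Illustrative sketch only.  Phase 1 runs on the fast path and
	 * decides whether the expensive phase-2 work (full ptrace/seccomp
	 * handling, which implies the slow IRET-style return) is needed.
	 */
	#include <stdio.h>
	#include <stdbool.h>

	/* Hypothetical per-task state for the sketch. */
	struct task_state {
		bool seccomp_enabled;	/* a seccomp filter is attached      */
		bool seccomp_allows;	/* filter would return "allow"       */
		bool ptrace_active;	/* a tracer wants syscall-entry stops */
	};

	/* Phase 1: return 0 to stay on the fast path, nonzero otherwise. */
	static unsigned long trace_enter_phase1(const struct task_state *ts)
	{
		if (ts->seccomp_enabled && !ts->seccomp_allows)
			return 1;	/* filter result needs full handling */
		if (ts->ptrace_active)
			return 1;	/* tracer must see this entry        */
		return 0;		/* common case: fast path            */
	}

	/* Phase 2: the expensive path, reached only when phase 1 demands it. */
	static long trace_enter_phase2(const struct task_state *ts, long nr)
	{
		printf("slow path: full trace handling for syscall %ld\n", nr);
		return nr;		/* possibly rewritten syscall number */
	}

	int main(void)
	{
		struct task_state ts = { .seccomp_enabled = true,
					 .seccomp_allows  = true,
					 .ptrace_active   = false };
		long nr = 39;		/* e.g. getpid on x86_64 */

		if (trace_enter_phase1(&ts) == 0)
			printf("fast path: syscall %ld, SYSRET return\n", nr);
		else
			trace_enter_phase2(&ts, nr);
		return 0;
	}

In this sketch the always-accept filter never leaves the fast path, which
is where the extra ~33ns of savings beyond the IRET bypass comes from.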
The diffstat is also rather nice :)
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/a3dbd267ee990110478d349f78cccfdac5497a84.1409954077.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Diffstat (limited to 'kernel/seccomp.c')
0 files changed, 0 insertions, 0 deletions