author    | Johannes Berg <johannes.berg@intel.com> | 2025-02-10 17:09:25 +0100
committer | Johannes Berg <johannes.berg@intel.com> | 2025-03-18 11:03:14 +0100
commit    | d1d7f01f7cd35e16c6bcef5a0e31988b5c9980f9 (patch)
tree      | ac2a7b2c4ccc6a9f1749c48eca519d615d125942 /arch/x86/um/shared/sysdep/faultinfo_32.h
parent    | 5550187c4c21740942c32a9ae56f9f472a104cb4 (diff)
um: mark rodata read-only and implement _nofault accesses
Mark read-only data as actually read-only (a simple mprotect), and, to be
able to test it, also implement _nofault accesses. These work by setting up
a new "segv_continue" pointer in current; when we then hit a segfault, we
change the signal return context so that execution continues at that
address. The code using this sets the pointer up so that it jumps to a
label and aborts the access that way, returning -EFAULT.
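
A minimal sketch of the handler side described above, assuming the continuation is applied to the host signal frame; the helper name and the use of glibc's ucontext accessors are illustrative only and are not the code this patch adds (the diff below only introduces the ___backtrack_faulted() macro and relies on current->thread.segv_continue):

/* Illustrative sketch only -- not the handler code added by this patch. */
#define _GNU_SOURCE
#include <ucontext.h>

/*
 * Hypothetical helper, called from the host SIGSEGV path with the saved
 * signal context.  If the faulting task registered a continuation via
 * ___backtrack_faulted(), rewrite the saved instruction pointer so that
 * sigreturn resumes at that label (which sets the "faulted" flag) instead
 * of retrying the faulting access.
 */
static int segv_maybe_continue(ucontext_t *uc, void *segv_continue)
{
	if (!segv_continue)
		return 0;	/* not a _nofault access, handle normally */

	/* i386: redirect the saved EIP to the recorded label */
	uc->uc_mcontext.gregs[REG_EIP] = (greg_t)(unsigned long)segv_continue;
	return 1;
}

In this sketch the caller would pass current->thread.segv_continue and clear it afterwards; a nonzero return means the fault has been absorbed and the access reports -EFAULT through the macro's fault label.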
It would be possible to optimize ___backtrack_faulted() by using asm goto
(compiler version dependent) and/or gcc's &&label extension (it is not
clear whether clang supports it), but in at least one attempt the &&label
form caused the compiler not to load -EFAULT into the register when jumping
to the label from the fault handler. So leave it like this for now.
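
For illustration, a rough sketch of the &&label variant referred to above; the function name and types are placeholders, and this is exactly the form the paragraph above rejects, so it is shown only to make the trade-off concrete:

/*
 * Hypothetical &&label form (NOT what the patch uses): gcc's
 * labels-as-values extension records the resume address directly.
 * As noted above, the compiler may fail to materialize -EFAULT on the
 * path entered from the fault handler, which is why the asm-based
 * ___backtrack_faulted() was kept instead.
 */
#include <linux/errno.h>
#include <linux/sched.h>	/* for current and thread.segv_continue */

static int read_ulong_nofault(unsigned long *dst, const unsigned long *src)
{
	int ret = 0;

	current->thread.segv_continue = &&faulted;
	*dst = *src;			/* may fault */
	goto out;

faulted:
	/* the SEGV handler resumed us here; report the failed access */
	ret = -EFAULT;
out:
	current->thread.segv_continue = NULL;
	return ret;
}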
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Co-developed-by: Benjamin Berg <benjamin.berg@intel.com>
Signed-off-by: Benjamin Berg <benjamin.berg@intel.com>
Link: https://patch.msgid.link/20250210160926.420133-2-benjamin@sipsolutions.net
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Diffstat (limited to 'arch/x86/um/shared/sysdep/faultinfo_32.h')
-rw-r--r-- | arch/x86/um/shared/sysdep/faultinfo_32.h | 12 |
1 files changed, 12 insertions, 0 deletions
diff --git a/arch/x86/um/shared/sysdep/faultinfo_32.h b/arch/x86/um/shared/sysdep/faultinfo_32.h
index b6f2437ec29c..ab5c8e47049c 100644
--- a/arch/x86/um/shared/sysdep/faultinfo_32.h
+++ b/arch/x86/um/shared/sysdep/faultinfo_32.h
@@ -29,4 +29,16 @@ struct faultinfo {
 
 #define PTRACE_FULL_FAULTINFO 0
 
+#define ___backtrack_faulted(_faulted)					\
+	asm volatile (							\
+		"mov $0, %0\n"						\
+		"movl $__get_kernel_nofault_faulted_%=,%1\n"		\
+		"jmp _end_%=\n"						\
+		"__get_kernel_nofault_faulted_%=:\n"			\
+		"mov $1, %0;"						\
+		"_end_%=:"						\
+		: "=r" (_faulted),					\
+		  "=m" (current->thread.segv_continue) ::		\
+	)
+
 #endif
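
As a usage sketch: the wrapper below shows roughly how a caller could build a nofault accessor on top of ___backtrack_faulted(). The real __get_kernel_nofault wiring lives outside this header and is not part of this diff, so the exact definition here is assumed, not quoted:

/*
 * Sketch of a caller of ___backtrack_faulted().  The macro clears
 * __faulted, stores the address of its internal label in
 * current->thread.segv_continue, and skips over that label.  If the
 * access below faults, the SEGV handler redirects the signal return
 * context to the label, __faulted becomes 1, and the check sends us to
 * err_label, so the caller reports -EFAULT instead of crashing.
 */
#define __get_kernel_nofault(dst, src, type, err_label)			\
do {									\
	int __faulted;							\
									\
	___backtrack_faulted(__faulted);				\
	if (__faulted)							\
		goto err_label;						\
	*((type *)(dst)) = *((type *)(src));		/* may fault */	\
	current->thread.segv_continue = NULL;				\
} while (0)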