<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/arch/arm/include, branch v4.9.89</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>arm: KVM: Survive unknown traps from guests</title>
<updated>2017-12-14T08:28:19+00:00</updated>
<author>
<name>Mark Rutland</name>
<email>mark.rutland@arm.com</email>
</author>
<published>2017-02-20T12:30:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=5e366aaec185c83157a126dad227a75e59899c62'/>
<id>5e366aaec185c83157a126dad227a75e59899c62</id>
<content type='text'>
[ Upstream commit f050fe7a9164945dd1c28be05bf00e8cfb082ccf ]

Currently we BUG() if we see an HSR.EC value we don't recognise. As
configurable disables/enables are added to the architecture (controlled
by RES1/RES0 bits respectively), with associated synchronous exceptions,
it may be possible for a guest to trigger exceptions with classes that
we don't recognise.

While we can't service these exceptions in a manner useful to the guest,
we can avoid bringing down the host. Per ARM DDI 0406C.c, all currently
unallocated HSR EC encodings are reserved, and per ARM DDI
0487A.k_iss10775, page G6-4395, EC values within the range 0x00 - 0x2c
are reserved for future use with synchronous exceptions, and EC values
within the range 0x2d - 0x3f may be used for either synchronous or
asynchronous exceptions.

The patch makes KVM handle any unknown EC by injecting an UNDEFINED
exception into the guest, with a corresponding (ratelimited) warning in
the host dmesg. We could later improve on this with a new (opt-in)
exit to the host userspace.
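
As a rough sketch (not the literal upstream diff; the handler name here is
illustrative), the fallback can be a normal exit handler that warns and
injects the exception, reusing the existing kvm_pr_unimpl() ratelimited
printk and the kvm_inject_undefined() helper:

static int handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
	u32 hsr = kvm_vcpu_get_hsr(vcpu);

	/* Ratelimited warning in the host dmesg, then punt to the guest. */
	kvm_pr_unimpl("Unknown exception class: hsr: %#08x\n", hsr);

	kvm_inject_undefined(vcpu);
	return 1;
}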

Cc: Dave Martin &lt;dave.martin@arm.com&gt;
Cc: Suzuki K Poulose &lt;suzuki.poulose@arm.com&gt;
Reviewed-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
Signed-off-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@verizon.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
[ Upstream commit f050fe7a9164945dd1c28be05bf00e8cfb082ccf ]

Currently we BUG() if we see an HSR.EC value we don't recognise. As
configurable disables/enables are added to the architecture (controlled
by RES1/RES0 bits respectively), with associated synchronous exceptions,
it may be possible for a guest to trigger exceptions with classes that
we don't recognise.

While we can't service these exceptions in a manner useful to the guest,
we can avoid bringing down the host. Per ARM DDI 0406C.c, all currently
unallocated HSR EC encodings are reserved, and per ARM DDI
0487A.k_iss10775, page G6-4395, EC values within the range 0x00 - 0x2c
are reserved for future use with synchronous exceptions, and EC values
within the range 0x2d - 0x3f may be used for either synchronous or
asynchronous exceptions.

The patch makes KVM handle any unknown EC by injecting an UNDEFINED
exception into the guest, with a corresponding (ratelimited) warning in
the host dmesg. We could later improve on this with a new (opt-in)
exit to the host userspace.

Cc: Dave Martin &lt;dave.martin@arm.com&gt;
Cc: Suzuki K Poulose &lt;suzuki.poulose@arm.com&gt;
Reviewed-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
Signed-off-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@verizon.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>ARM: 8657/1: uaccess: consistently check object sizes</title>
<updated>2017-12-14T08:28:16+00:00</updated>
<author>
<name>Kees Cook</name>
<email>keescook@chromium.org</email>
</author>
<published>2017-02-16T00:43:58+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=5d4d0a95437efc94c3b10a2ea04a81e5e3863dfb'/>
<id>5d4d0a95437efc94c3b10a2ea04a81e5e3863dfb</id>
<content type='text'>
[ Upstream commit 32b143637e8180f5d5cea54320c769210dea4f19 ]

In commit 76624175dcae ("arm64: uaccess: consistently check object sizes"),
the object size checks are moved outside the access_ok() so that bad
destinations are detected before hitting the "memset(dest, 0, size)" in the
copy_from_user() failure path.

This makes the same change for arm, with attention given to possibly
extracting the uaccess routines into a common header file for all
architectures in the future.
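
A simplified sketch of the resulting copy_from_user() wrapper (helper names
as in the 4.9 asm/uaccess.h, details trimmed): check_object_size() now runs
ahead of access_ok(), so a bad destination is caught before the zeroing
failure path:

static inline unsigned long __must_check
copy_from_user(void *to, const void __user *from, unsigned long n)
{
	unsigned long res = n;

	check_object_size(to, n, false);	/* moved before access_ok() */
	if (likely(access_ok(VERIFY_READ, from, n)))
		res = __copy_from_user(to, from, n);
	if (unlikely(res))
		memset(to + (n - res), 0, res);
	return res;
}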

Suggested-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Signed-off-by: Kees Cook &lt;keescook@chromium.org&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@verizon.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
[ Upstream commit 32b143637e8180f5d5cea54320c769210dea4f19 ]

In commit 76624175dcae ("arm64: uaccess: consistently check object sizes"),
the object size checks are moved outside the access_ok() so that bad
destinations are detected before hitting the "memset(dest, 0, size)" in the
copy_from_user() failure path.

This makes the same change for arm, with attention given to possibly
extracting the uaccess routines into a common header file for all
architectures in the future.

Suggested-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Signed-off-by: Kees Cook &lt;keescook@chromium.org&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@verizon.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>ARM: BUG if jumping to usermode address in kernel mode</title>
<updated>2017-12-14T08:28:16+00:00</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@armlinux.org.uk</email>
</author>
<published>2017-11-24T23:49:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=6192f870abf25d6bb83cb5cb85b072b5dcc40ba4'/>
<id>6192f870abf25d6bb83cb5cb85b072b5dcc40ba4</id>
<content type='text'>
commit 8bafae202c82dc257f649ea3c275a0f35ee15113 upstream.

Detect if we are returning to usermode via the normal kernel exit paths
but the saved PSR value indicates that we are in kernel mode.  This
could occur due to corrupted stack state, which has been observed with
"ftracetest".

This ensures that we catch the problem case before we get to user code.
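
The actual check is done in the assembly return-to-user path; as a purely
hypothetical C rendering of the idea (not the real instruction sequence):

/* Saved PSR must select user mode (mode bits 3:0 clear) with IRQs
 * unmasked; anything else means the exit path was handed kernel state. */
if (unlikely(regs-&gt;ARM_cpsr &amp; (PSR_I_BIT | 0x0f)))
	BUG();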

Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
Cc: Alex Shi &lt;alex.shi@linaro.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 8bafae202c82dc257f649ea3c275a0f35ee15113 upstream.

Detect if we are returning to usermode via the normal kernel exit paths
but the saved PSR value indicates that we are in kernel mode.  This
could occur due to corrupted stack state, which has been observed with
"ftracetest".

This ensures that we catch the problem case before we get to user code.

Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
Cc: Alex Shi &lt;alex.shi@linaro.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>arm: KVM: Fix VTTBR_BADDR_MASK BUG_ON off-by-one</title>
<updated>2017-12-14T08:28:14+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2017-11-16T17:58:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=6ed459712e26acd638b85c4f06dd355960e741c2'/>
<id>6ed459712e26acd638b85c4f06dd355960e741c2</id>
<content type='text'>
commit 5553b142be11e794ebc0805950b2e8313f93d718 upstream.

VTTBR_BADDR_MASK is used to sanity check the size and alignment of the
VTTBR address. It seems to currently be off by one, thereby only
allowing up to 39-bit addresses (instead of 40-bit) and also
insufficiently checking the alignment. This patch fixes it.

This patch is the 32-bit counterpart of Kristina's arm64 fix, and
she deserves the actual kudos for pinpointing that one.
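
Illustrative before/after of the mask definition (the _OLD/_NEW names exist
only for this sketch; VTTBR_X is the alignment exponent defined earlier in
asm/kvm_arm.h): shifting by one bit too few both drops bit 39 and lets a
misaligned base address slip past the BUG_ON.

/* before (off by one): covers bits [38:VTTBR_X-1] */
#define VTTBR_BADDR_MASK_OLD (((1LLU &lt;&lt; (40 - VTTBR_X)) - 1) &lt;&lt; (VTTBR_X - 1))

/* after: covers bits [39:VTTBR_X], i.e. a 40-bit, 2^VTTBR_X-aligned address */
#define VTTBR_BADDR_MASK_NEW (((1LLU &lt;&lt; (40 - VTTBR_X)) - 1) &lt;&lt; VTTBR_X)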

Fixes: f7ed45be3ba52 ("KVM: ARM: World-switch implementation")
Reported-by: Kristina Martsenko &lt;kristina.martsenko@arm.com&gt;
Reviewed-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 5553b142be11e794ebc0805950b2e8313f93d718 upstream.

VTTBR_BADDR_MASK is used to sanity check the size and alignment of the
VTTBR address. It seems to currently be off by one, thereby only
allowing up to 39-bit addresses (instead of 40-bit) and also
insufficiently checking the alignment. This patch fixes it.

This patch is the 32-bit counterpart of Kristina's arm64 fix, and
she deserves the actual kudos for pinpointing that one.

Fixes: f7ed45be3ba52 ("KVM: ARM: World-switch implementation")
Reported-by: Kristina Martsenko &lt;kristina.martsenko@arm.com&gt;
Reviewed-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>ARM: 8715/1: add a private asm/unaligned.h</title>
<updated>2017-11-08T09:08:32+00:00</updated>
<author>
<name>Arnd Bergmann</name>
<email>arnd@arndb.de</email>
</author>
<published>2017-10-20T20:17:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=b3937f55c725894d03ca6592a8dd29662e85251d'/>
<id>b3937f55c725894d03ca6592a8dd29662e85251d</id>
<content type='text'>
commit 1cce91dfc8f7990ca3aea896bfb148f240b12860 upstream.

The asm-generic/unaligned.h header provides two different implementations
for accessing unaligned variables: the access_ok.h version used when
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set pretends that all pointers
are in fact aligned, while the le_struct.h version convinces gcc that the
alignment of a pointer is '1', to make it issue the correct load/store
instructions depending on the architecture flags.

On ARMv5 and older, we always use the second version, to let the compiler
use byte accesses. On ARMv6 and newer, we currently use the access_ok.h
version, so the compiler can use any instruction including stm/ldm and
ldrd/strd that will cause an alignment trap. This trap can significantly
impact performance when we have to do a lot of fixups and, worse, has
led to crashes in the LZ4 decompressor code that does not have a trap
handler.

This adds an ARM specific version of asm/unaligned.h that uses the
le_struct.h/be_struct.h implementation unconditionally. This should lead
to essentially the same code on ARMv6+ as before, with the exception of
using regular load/store instructions instead of the trapping
multi-register variants.

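The new header is essentially a thin wrapper that always picks the
packed-struct helpers; a sketch of its shape (paths follow the 4.9
include/linux/unaligned layout):

#ifndef __ASM_ARM_UNALIGNED_H
#define __ASM_ARM_UNALIGNED_H

#include &lt;asm/byteorder.h&gt;

/* Always use the packed-struct accessors so gcc emits byte-safe
 * load/stores, even though HAVE_EFFICIENT_UNALIGNED_ACCESS is set on
 * ARMv6+. */
#if defined(__LITTLE_ENDIAN)
# include &lt;linux/unaligned/le_struct.h&gt;
# include &lt;linux/unaligned/be_byteshift.h&gt;
# include &lt;linux/unaligned/generic.h&gt;
# define get_unaligned	__get_unaligned_le
# define put_unaligned	__put_unaligned_le
#elif defined(__BIG_ENDIAN)
# include &lt;linux/unaligned/be_struct.h&gt;
# include &lt;linux/unaligned/le_byteshift.h&gt;
# include &lt;linux/unaligned/generic.h&gt;
# define get_unaligned	__get_unaligned_be
# define put_unaligned	__put_unaligned_be
#endif

#endif /* __ASM_ARM_UNALIGNED_H */
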
The crash in the LZ4 decompressor code was probably introduced by the
patch replacing the LZ4 implementation, commit 4e1a33b105dd ("lib: update
LZ4 compressor module"), so linux-4.11 and higher would be affected most.
However, we probably want to have this backported to all older stable
kernels as well, to help with the performance issues.

There are two follow-ups that I think we should also work on, but not
backport to stable kernels: first, to change the asm-generic version of
the header to remove the ARM special case, and second, to review all
other uses of CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to see if they
might be affected by the same problem on ARM.

Signed-off-by: Arnd Bergmann &lt;arnd@arndb.de&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 1cce91dfc8f7990ca3aea896bfb148f240b12860 upstream.

The asm-generic/unaligned.h header provides two different implementations
for accessing unaligned variables: the access_ok.h version used when
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set pretends that all pointers
are in fact aligned, while the le_struct.h version convinces gcc that the
alignment of a pointer is '1', to make it issue the correct load/store
instructions depending on the architecture flags.

On ARMv5 and older, we always use the second version, to let the compiler
use byte accesses. On ARMv6 and newer, we currently use the access_ok.h
version, so the compiler can use any instruction including stm/ldm and
ldrd/strd that will cause an alignment trap. This trap can significantly
impact performance when we have to do a lot of fixups and, worse, has
led to crashes in the LZ4 decompressor code that does not have a trap
handler.

This adds an ARM specific version of asm/unaligned.h that uses the
le_struct.h/be_struct.h implementation unconditionally. This should lead
to essentially the same code on ARMv6+ as before, with the exception of
using regular load/store instructions instead of the trapping
multi-register variants.

The crash in the LZ4 decompressor code was probably introduced by the
patch replacing the LZ4 implementation, commit 4e1a33b105dd ("lib: update
LZ4 compressor module"), so linux-4.11 and higher would be affected most.
However, we probably want to have this backported to all older stable
kernels as well, to help with the performance issues.

There are two follow-ups that I think we should also work on, but not
backport to stable kernels: first, to change the asm-generic version of
the header to remove the ARM special case, and second, to review all
other uses of CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to see if they
might be affected by the same problem on ARM.

Signed-off-by: Arnd Bergmann &lt;arnd@arndb.de&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>ARM: 8632/1: ftrace: fix syscall name matching</title>
<updated>2017-08-11T15:49:36+00:00</updated>
<author>
<name>Rabin Vincent</name>
<email>rabinv@axis.com</email>
</author>
<published>2016-11-23T12:02:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=bebbe845cb50ab4b499bff2d0bbe2fef09de86af'/>
<id>bebbe845cb50ab4b499bff2d0bbe2fef09de86af</id>
<content type='text'>
[ Upstream commit 270c8cf1cacc69cb8d99dea812f06067a45e4609 ]

ARM has a few system calls (most notably mmap) for which the names of
the functions which are referenced in the syscall table do not match the
names of the syscall tracepoints.  As a consequence of this, these
tracepoints are not made available.  Implement
arch_syscall_match_sym_name to fix this and allow tracing even these
system calls.
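
The hook is a small name-translation helper in asm/ftrace.h, roughly along
these lines (the alias list shown is illustrative, not exhaustive):

#include &lt;linux/string.h&gt;

#define ARCH_HAS_SYSCALL_MATCH_SYM_NAME

static inline bool arch_syscall_match_sym_name(const char *sym,
					       const char *name)
{
	/* Map ARM-specific wrapper symbols back to the generic syscall
	 * names the tracepoints are registered under. */
	if (!strcmp(sym, "sys_mmap2"))
		sym = "sys_mmap_pgoff";
	else if (!strcmp(sym, "sys_arm_fadvise64_64"))
		sym = "sys_fadvise64_64";

	return !strcmp(sym, name);
}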

Signed-off-by: Rabin Vincent &lt;rabinv@axis.com&gt;
Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@verizon.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
[ Upstream commit 270c8cf1cacc69cb8d99dea812f06067a45e4609 ]

ARM has a few system calls (most notably mmap) for which the names of
the functions which are referenced in the syscall table do not match the
names of the syscall tracepoints.  As a consequence of this, these
tracepoints are not made available.  Implement
arch_syscall_match_sym_name to fix this and allow tracing even these
system calls.

Signed-off-by: Rabin Vincent &lt;rabinv@axis.com&gt;
Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@verizon.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>arm: move ELF_ET_DYN_BASE to 4MB</title>
<updated>2017-07-21T05:42:21+00:00</updated>
<author>
<name>Kees Cook</name>
<email>keescook@chromium.org</email>
</author>
<published>2017-07-10T22:52:40+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=abb79a56898e044715d8efd07bc1902d27a94870'/>
<id>abb79a56898e044715d8efd07bc1902d27a94870</id>
<content type='text'>
commit 6a9af90a3bcde217a1c053e135f5f43e5d5fafbd upstream.

Now that explicitly executed loaders are loaded in the mmap region, we
have more freedom to decide where we position PIE binaries in the
address space to avoid possible collisions with mmap or stack regions.

4MB is chosen here mainly to have parity with x86, where this is the
traditional minimum load location, likely to avoid historically
requiring a 4MB page table entry when only a portion of the first 4MB
would be used (since the NULL address is avoided).

For ARM the position could be 0x8000, the standard ET_EXEC load address,
but that is needlessly close to the NULL address, and anyone running PIE
on 32-bit ARM will have an MMU, so the tight mapping is not needed.
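
Concretely this boils down to pinning the PIE base in asm/elf.h (shown as a
sketch):

/* Base load address for PIE (ET_DYN with PT_INTERP) binaries: 4MB. */
#define ELF_ET_DYN_BASE		0x400000UL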

Link: http://lkml.kernel.org/r/1498154792-49952-2-git-send-email-keescook@chromium.org
Signed-off-by: Kees Cook &lt;keescook@chromium.org&gt;
Cc: Russell King &lt;linux@armlinux.org.uk&gt;
Cc: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: Benjamin Herrenschmidt &lt;benh@kernel.crashing.org&gt;
Cc: Paul Mackerras &lt;paulus@samba.org&gt;
Cc: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Cc: Martin Schwidefsky &lt;schwidefsky@de.ibm.com&gt;
Cc: Heiko Carstens &lt;heiko.carstens@de.ibm.com&gt;
Cc: James Hogan &lt;james.hogan@imgtec.com&gt;
Cc: Pratyush Anand &lt;panand@redhat.com&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: "H. Peter Anvin" &lt;hpa@zytor.com&gt;
Cc: Alexander Viro &lt;viro@zeniv.linux.org.uk&gt;
Cc: Andy Lutomirski &lt;luto@amacapital.net&gt;
Cc: Daniel Micay &lt;danielmicay@gmail.com&gt;
Cc: Dmitry Safonov &lt;dsafonov@virtuozzo.com&gt;
Cc: Grzegorz Andrejczuk &lt;grzegorz.andrejczuk@intel.com&gt;
Cc: Kees Cook &lt;keescook@chromium.org&gt;
Cc: Masahiro Yamada &lt;yamada.masahiro@socionext.com&gt;
Cc: Qualys Security Advisory &lt;qsa@qualys.com&gt;
Cc: Rik van Riel &lt;riel@redhat.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 6a9af90a3bcde217a1c053e135f5f43e5d5fafbd upstream.

Now that explicitly executed loaders are loaded in the mmap region, we
have more freedom to decide where we position PIE binaries in the
address space to avoid possible collisions with mmap or stack regions.

4MB is chosen here mainly to have parity with x86, where this is the
traditional minimum load location, likely to avoid historically
requiring a 4MB page table entry when only a portion of the first 4MB
would be used (since the NULL address is avoided).

For ARM the position could be 0x8000, the standard ET_EXEC load address,
but that is needlessly close to the NULL address, and anyone running PIE
on 32-bit ARM will have an MMU, so the tight mapping is not needed.

Link: http://lkml.kernel.org/r/1498154792-49952-2-git-send-email-keescook@chromium.org
Signed-off-by: Kees Cook &lt;keescook@chromium.org&gt;
Cc: Russell King &lt;linux@armlinux.org.uk&gt;
Cc: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: Benjamin Herrenschmidt &lt;benh@kernel.crashing.org&gt;
Cc: Paul Mackerras &lt;paulus@samba.org&gt;
Cc: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Cc: Martin Schwidefsky &lt;schwidefsky@de.ibm.com&gt;
Cc: Heiko Carstens &lt;heiko.carstens@de.ibm.com&gt;
Cc: James Hogan &lt;james.hogan@imgtec.com&gt;
Cc: Pratyush Anand &lt;panand@redhat.com&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: "H. Peter Anvin" &lt;hpa@zytor.com&gt;
Cc: Alexander Viro &lt;viro@zeniv.linux.org.uk&gt;
Cc: Andy Lutomirski &lt;luto@amacapital.net&gt;
Cc: Daniel Micay &lt;danielmicay@gmail.com&gt;
Cc: Dmitry Safonov &lt;dsafonov@virtuozzo.com&gt;
Cc: Grzegorz Andrejczuk &lt;grzegorz.andrejczuk@intel.com&gt;
Cc: Kees Cook &lt;keescook@chromium.org&gt;
Cc: Masahiro Yamada &lt;yamada.masahiro@socionext.com&gt;
Cc: Qualys Security Advisory &lt;qsa@qualys.com&gt;
Cc: Rik van Riel &lt;riel@redhat.com&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>ARM: 8662/1: module: split core and init PLT sections</title>
<updated>2017-05-25T13:44:44+00:00</updated>
<author>
<name>Ard Biesheuvel</name>
<email>ard.biesheuvel@linaro.org</email>
</author>
<published>2017-02-22T18:40:12+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=3304f5a1cb874c63fcc48f9021320510a73c03f9'/>
<id>3304f5a1cb874c63fcc48f9021320510a73c03f9</id>
<content type='text'>
commit b7ede5a1f5905ac394cc8e61712a13e3c5cb7b8f upstream.

Since commit 35fa91eed817 ("ARM: kernel: merge core and init PLTs"),
the ARM module PLT code allocates all PLT entries in a single core
section, since the overhead of having a separate init PLT section is
not justified by the small number of PLT entries usually required for
init code.

However, the core and init module regions are allocated independently,
and there is a corner case where the core region may be allocated from
the VMALLOC region if the dedicated module region is exhausted, but the
init region, being much smaller, can still be allocated from the module
region. This puts the PLT entries out of reach of the relocated branch
instructions, defeating the whole purpose of PLTs.

So split the core and init PLT regions, and name the latter ".init.plt"
so it gets allocated along with (and sufficiently close to) the .init
sections that it serves. Also, given that init PLT entries may need to
be emitted for branches that target the core module, modify the logic
that disregards defined symbols to only disregard symbols that are
defined in the same section.
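
A sketch of the resulting bookkeeping in asm/module.h terms (field names
illustrative): each region carries its own PLT section pointer and entry
count.

struct elf32_shdr;

struct mod_plt_sec {
	struct elf32_shdr	*plt;
	int			plt_count;
};

struct mod_arch_specific {
	/* One PLT per allocation region, so init branches always have a
	 * reachable veneer even when the core region lands in vmalloc space. */
	struct mod_plt_sec	core;
	struct mod_plt_sec	init;
};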

Fixes: 35fa91eed817 ("ARM: kernel: merge core and init PLTs")
Reported-by: Angus Clark &lt;angus@angusclark.org&gt;
Tested-by: Angus Clark &lt;angus@angusclark.org&gt;
Signed-off-by: Ard Biesheuvel &lt;ard.biesheuvel@linaro.org&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit b7ede5a1f5905ac394cc8e61712a13e3c5cb7b8f upstream.

Since commit 35fa91eed817 ("ARM: kernel: merge core and init PLTs"),
the ARM module PLT code allocates all PLT entries in a single core
section, since the overhead of having a separate init PLT section is
not justified by the small number of PLT entries usually required for
init code.

However, the core and init module regions are allocated independently,
and there is a corner case where the core region may be allocated from
the VMALLOC region if the dedicated module region is exhausted, but the
init region, being much smaller, can still be allocated from the module
region. This puts the PLT entries out of reach of the relocated branch
instructions, defeating the whole purpose of PLTs.

So split the core and init PLT regions, and name the latter ".init.plt"
so it gets allocated along with (and sufficiently close to) the .init
sections that it serves. Also, given that init PLT entries may need to
be emitted for branches that target the core module, modify the logic
that disregards defined symbols to only disregard symbols that are
defined in the same section.

Fixes: 35fa91eed817 ("ARM: kernel: merge core and init PLTs")
Reported-by: Angus Clark &lt;angus@angusclark.org&gt;
Tested-by: Angus Clark &lt;angus@angusclark.org&gt;
Signed-off-by: Ard Biesheuvel &lt;ard.biesheuvel@linaro.org&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>KVM: arm: plug potential guest hardware debug leakage</title>
<updated>2017-05-25T13:44:44+00:00</updated>
<author>
<name>Zhichao Huang</name>
<email>zhichao.huang@linaro.org</email>
</author>
<published>2017-05-11T12:46:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=ee773459557d2242ddf6a2289e4615b4eb0668ae'/>
<id>ee773459557d2242ddf6a2289e4615b4eb0668ae</id>
<content type='text'>
commit 661e6b02b5aa82db31897f36e96324b77450fd7a upstream.

Hardware debugging in guests is not intercepted currently, which means
that a malicious guest can bring down the entire machine by writing
to the debug registers.

This patch enables trapping of all debug registers, preventing the
guests from accessing the debug registers. This includes access to the
debug mode (DBGDSCR) in the guest world at all times, which could
otherwise mess with the host state. Reads return 0 and writes are
ignored (RAZ/WI).

The result is that the guest cannot detect any working hardware-based
debug support. As debug exceptions are still routed to the guest,
normal debugging using software-based breakpoints still works.

To support debugging using hardware registers we need to implement a
debug register aware world switch as well as special trapping for
registers that may affect the host state.
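
A sketch of such a RAZ/WI trap handler in the arch/arm/kvm/coproc.c style
(helper and struct names follow that file; treat the code as illustrative):

static bool trap_raz_wi(struct kvm_vcpu *vcpu,
			const struct coproc_params *p,
			const struct coproc_reg *r)
{
	/* Writes are silently dropped and reads return zero, so the guest
	 * sees no usable hardware debug support. */
	if (p-&gt;is_write)
		return ignore_write(vcpu, p);
	else
		return read_zero(vcpu, p);
}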

Signed-off-by: Zhichao Huang &lt;zhichao.huang@linaro.org&gt;
Signed-off-by: Alex Bennée &lt;alex.bennee@linaro.org&gt;
Reviewed-by: Christoffer Dall &lt;cdall@linaro.org&gt;
Signed-off-by: Christoffer Dall &lt;cdall@linaro.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 661e6b02b5aa82db31897f36e96324b77450fd7a upstream.

Hardware debugging in guests is not intercepted currently, which means
that a malicious guest can bring down the entire machine by writing
to the debug registers.

This patch enables trapping of all debug registers, preventing the
guests from accessing the debug registers. This includes access to the
debug mode (DBGDSCR) in the guest world at all times, which could
otherwise mess with the host state. Reads return 0 and writes are
ignored (RAZ/WI).

The result is that the guest cannot detect any working hardware-based
debug support. As debug exceptions are still routed to the guest,
normal debugging using software-based breakpoints still works.

To support debugging using hardware registers we need to implement a
debug register aware world switch as well as special trapping for
registers that may affect the host state.

Signed-off-by: Zhichao Huang &lt;zhichao.huang@linaro.org&gt;
Signed-off-by: Alex Bennée &lt;alex.bennee@linaro.org&gt;
Reviewed-by: Christoffer Dall &lt;cdall@linaro.org&gt;
Signed-off-by: Christoffer Dall &lt;cdall@linaro.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>arm/arm64: KVM: Enforce unconditional flush to PoC when mapping to stage-2</title>
<updated>2017-03-12T05:41:47+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2017-01-25T12:29:59+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=ac4c8fcf5ebceb0a2b6342707e4835a13f5bcf22'/>
<id>ac4c8fcf5ebceb0a2b6342707e4835a13f5bcf22</id>
<content type='text'>
commit 8f36ebaf21fdae99c091c67e8b6fab33969f2667 upstream.

When we fault in a page, we flush it to the PoC (Point of Coherency)
if the faulting vcpu has its own caches off, so that it can observe
the page we just brought in.

But if the vcpu has its caches on, we skip that step. Bad things
happen when *another* vcpu tries to access that page with its own
caches disabled. At that point, there is no guarantee that the
data has made it to the PoC, and we access stale data.

The obvious fix is to always flush to PoC when a page is faulted
in, no matter what the state of the vcpu is.
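
Conceptually, in the stage-2 fault path (a pseudocode-level sketch using
asm/kvm_mmu.h helper names, not the literal diff):

/* before: flush only when the faulting vcpu itself ran with caches off */
if (!vcpu_has_cache_enabled(vcpu))
	kvm_flush_dcache_to_poc(va, size);

/* after: another vcpu may access the page with its caches still off,
 * so always clean to the Point of Coherency */
kvm_flush_dcache_to_poc(va, size);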

Fixes: 2d58b733c876 ("arm64: KVM: force cache clean on page fault when caches are off")
Reviewed-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 8f36ebaf21fdae99c091c67e8b6fab33969f2667 upstream.

When we fault in a page, we flush it to the PoC (Point of Coherency)
if the faulting vcpu has its own caches off, so that it can observe
the page we just brought in.

But if the vcpu has its caches on, we skip that step. Bad things
happen when *another* vcpu tries to access that page with its own
caches disabled. At that point, there is no guarantee that the
data has made it to the PoC, and we access stale data.

The obvious fix is to always flush to PoC when a page is faulted
in, no matter what the state of the vcpu is.

Fixes: 2d58b733c876 ("arm64: KVM: force cache clean on page fault when caches are off")
Reviewed-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
</feed>
