<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/arch, branch v4.2.5</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>arm64: Fix THP protection change logic</title>
<updated>2015-10-27T00:53:43+00:00</updated>
<author>
<name>Steve Capper</name>
<email>steve.capper@linaro.org</email>
</author>
<published>2015-10-01T12:06:07+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=dba4e9c12c31050ffe6438b60157de48cc7f3c62'/>
<id>dba4e9c12c31050ffe6438b60157de48cc7f3c62</id>
<content type='text'>
commit 1a541b4e3cd6f5795022514114854b3e1345f24e upstream.

6910fa1 ("arm64: enable PTE type bit in the mask for pte_modify") fixes
a problem whereby a large block of PROT_NONE mapped memory is
incorrectly mapped as block descriptors when mprotect is called.

Unfortunately, a subtle bug was introduced by this fix to the THP logic.

If one mmaps a large block of memory and then faults it such that it is
collapsed into THPs, subsequent calls to mprotect on this area of memory
will lead to incorrect table descriptors being written instead of block
descriptors. This is because pmd_modify calls pte_modify, which is now
allowed to modify the type of the page table entry.

This patch reverts commit 6910fa16dbe142f6a0fd0fd7c249f9883ff7fc8a, and
fixes the problem it was trying to address by adjusting PAGE_NONE to
represent a table entry. Thus no change in pte type is required when
moving from PROT_NONE to a different protection.
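
As a rough userspace illustration of the scenario (a sketch, not part of
the patch; the region size and the madvise hint are assumptions):

  /* Map a large region, fault it in so it can be collapsed into THPs,
   * then change its protection.  On an affected kernel, the mprotect()
   * call rewrote PMD block descriptors as table descriptors. */
  #include &lt;string.h&gt;
  #include &lt;sys/mman.h&gt;

  int main(void)
  {
          size_t len = 64UL &lt;&lt; 20;        /* 64 MiB of anonymous memory */
          char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          if (p == MAP_FAILED)
                  return 1;
          madvise(p, len, MADV_HUGEPAGE); /* invite THP collapse */
          memset(p, 0x5a, len);           /* fault the region in */
          return mprotect(p, len, PROT_READ) ? 1 : 0;
  }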

Fixes: 6910fa16dbe1 ("arm64: enable PTE type bit in the mask for pte_modify")
Cc: &lt;stable@vger.kernel.org&gt; # 4.0+
Cc: Feng Kan &lt;fkan@apm.com&gt;
Reported-by: Ganapatrao Kulkarni &lt;Ganapatrao.Kulkarni@caviumnetworks.com&gt;
Tested-by: Ganapatrao Kulkarni &lt;gkulkarni@caviumnetworks.com&gt;
Reviewed-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
[SteveC: backported 1a541b4e3cd6f5795022514114854b3e1345f24e to 4.1 and
 4.2 stable. Just one minor fix to the second part to allow the patch to
apply cleanly; no logic changed.]
Signed-off-by: Steve Capper &lt;steve.capper@linaro.org&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 1a541b4e3cd6f5795022514114854b3e1345f24e upstream.

6910fa1 ("arm64: enable PTE type bit in the mask for pte_modify") fixes
a problem whereby a large block of PROT_NONE mapped memory is
incorrectly mapped as block descriptors when mprotect is called.

Unfortunately, a subtle bug was introduced by this fix to the THP logic.

If one mmaps a large block of memory and then faults it such that it is
collapsed into THPs, subsequent calls to mprotect on this area of memory
will lead to incorrect table descriptors being written instead of block
descriptors. This is because pmd_modify calls pte_modify, which is now
allowed to modify the type of the page table entry.

This patch reverts commit 6910fa16dbe142f6a0fd0fd7c249f9883ff7fc8a, and
fixes the problem it was trying to address by adjusting PAGE_NONE to
represent a table entry. Thus no change in pte type is required when
moving from PROT_NONE to a different protection.

Fixes: 6910fa16dbe1 ("arm64: enable PTE type bit in the mask for pte_modify")
Cc: &lt;stable@vger.kernel.org&gt; # 4.0+
Cc: Feng Kan &lt;fkan@apm.com&gt;
Reported-by: Ganapatrao Kulkarni &lt;Ganapatrao.Kulkarni@caviumnetworks.com&gt;
Tested-by: Ganapatrao Kulkarni &lt;gkulkarni@caviumnetworks.com&gt;
Reviewed-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
[SteveC: backported 1a541b4e3cd6f5795022514114854b3e1345f24e to 4.1 and
 4.2 stable. Just one minor fix to the second part to allow the patch to
apply cleanly; no logic changed.]
Signed-off-by: Steve Capper &lt;steve.capper@linaro.org&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>KVM: x86: fix RSM into 64-bit protected mode</title>
<updated>2015-10-27T00:53:40+00:00</updated>
<author>
<name>Paolo Bonzini</name>
<email>pbonzini@redhat.com</email>
</author>
<published>2015-10-14T13:25:52+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=03900dd5e4e08e3617d82c8b62bf5ed48d2508cd'/>
<id>03900dd5e4e08e3617d82c8b62bf5ed48d2508cd</id>
<content type='text'>
commit b10d92a54dac25a6152f1aa1ffc95c12908035ce upstream.

In order to get into 64-bit protected mode, you need to enable
paging while EFER.LMA=1.  For this to work, CS.L must be 0.
Currently, we load the segments before CR0 and CR4, which means
that if RSM returns into 64-bit protected mode CS.L is already 1
and everything breaks.

Luckily, CS.L=0 is always the case when executing RSM, because it
is forbidden to execute RSM from 64-bit protected mode.  Hence it
is enough to load CR0 and CR4 first, and only then the segments.
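
A compile-only sketch of the ordering the fix enforces; the vcpu struct
and field names are stand-ins, not KVM's emulator API:

  struct vcpu_sketch { unsigned long cr0, cr4; int cs_l; };

  static void rsm_restore_sketch(struct vcpu_sketch *v, unsigned long cr0,
                                 unsigned long cr4, int cs_l_from_smram)
  {
          /* Control registers first: paging is enabled while CS.L is
           * still 0, guaranteed because RSM cannot be executed from
           * 64-bit mode. */
          v-&gt;cr4 = cr4;
          v-&gt;cr0 = cr0;
          /* Segments last; only now may CS.L become 1. */
          v-&gt;cs_l = cs_l_from_smram;
  }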

Fixes: 660a5d517aaab9187f93854425c4c63f4a09195c
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit b10d92a54dac25a6152f1aa1ffc95c12908035ce upstream.

In order to get into 64-bit protected mode, you need to enable
paging while EFER.LMA=1.  For this to work, CS.L must be 0.
Currently, we load the segments before CR0 and CR4, which means
that if RSM returns into 64-bit protected mode CS.L is already 1
and everything breaks.

Luckily, CS.L=0 is always the case when executing RSM, because it
is forbidden to execute RSM from 64-bit protected mode.  Hence it
is enough to load CR0 and CR4 first, and only then the segments.

Fixes: 660a5d517aaab9187f93854425c4c63f4a09195c
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>KVM: x86: fix SMI to halted VCPU</title>
<updated>2015-10-27T00:53:40+00:00</updated>
<author>
<name>Paolo Bonzini</name>
<email>pbonzini@redhat.com</email>
</author>
<published>2015-10-13T08:19:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=0c2d5845347acbb2c140bd3e75baa8cd4a368949'/>
<id>0c2d5845347acbb2c140bd3e75baa8cd4a368949</id>
<content type='text'>
commit 73917739334c6509833b0403b81d4a04a8784bdf upstream.

An SMI to a halted VCPU must wake it up, hence a VCPU with a pending
SMI must be considered runnable.
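
The shape of the check, sketched with illustrative field names rather
than the real kvm_vcpu structure:

  struct vcpu_state_sketch { int halted; int smi_pending; int irq_pending; };

  static int vcpu_runnable_sketch(const struct vcpu_state_sketch *v)
  {
          if (v-&gt;smi_pending)     /* a pending SMI wakes a halted VCPU */
                  return 1;
          return !v-&gt;halted || v-&gt;irq_pending;
  }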

Fixes: 64d6067057d9658acb8675afcfba549abdb7fc16
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 73917739334c6509833b0403b81d4a04a8784bdf upstream.

An SMI to a halted VCPU must wake it up, hence a VCPU with a pending
SMI must be considered runnable.

Fixes: 64d6067057d9658acb8675afcfba549abdb7fc16
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>KVM: x86: clean up kvm_arch_vcpu_runnable</title>
<updated>2015-10-27T00:53:39+00:00</updated>
<author>
<name>Paolo Bonzini</name>
<email>pbonzini@redhat.com</email>
</author>
<published>2015-10-13T08:18:53+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=ea83b307d539fe48c25cbfc8e1705673de21edc6'/>
<id>ea83b307d539fe48c25cbfc8e1705673de21edc6</id>
<content type='text'>
commit 5d9bc648b94dd719022343b8675e6c4381f0c45f upstream.

Split the huge conditional in two functions.
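
Schematically, with stand-in names for the real helpers, the split
looks like:

  struct vcpu_sketch { int mp_state_runnable; int event_pending; };

  static int vcpu_running_sketch(const struct vcpu_sketch *v)
  {
          return v-&gt;mp_state_runnable;
  }

  static int vcpu_has_events_sketch(const struct vcpu_sketch *v)
  {
          return v-&gt;event_pending;
  }

  static int vcpu_runnable_sketch(const struct vcpu_sketch *v)
  {
          return vcpu_running_sketch(v) || vcpu_has_events_sketch(v);
  }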

Fixes: 64d6067057d9658acb8675afcfba549abdb7fc16
Reviewed-by: Radim Krčmář &lt;rkrcmar@redhat.com&gt;
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 5d9bc648b94dd719022343b8675e6c4381f0c45f upstream.

Split the huge conditional in two functions.

Fixes: 64d6067057d9658acb8675afcfba549abdb7fc16
Reviewed-by: Radim Krčmář &lt;rkrcmar@redhat.com&gt;
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>ARM: ux500: simplify secondary CPU boot</title>
<updated>2015-10-27T00:53:38+00:00</updated>
<author>
<name>Linus Walleij</name>
<email>linus.walleij@linaro.org</email>
</author>
<published>2015-08-03T07:26:52+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=f245eaa43ccd64e51a8e40a93d8a2e3b28a13b33'/>
<id>f245eaa43ccd64e51a8e40a93d8a2e3b28a13b33</id>
<content type='text'>
commit c00def71efd919e8ae835a25f4f4c80a4b2d36d3 upstream.

This removes a lot of ancient cruft from the Ux500 SMP boot.
Instead of the pen grab/release, just point the ROM to
secondary_boot() and start the second CPU there, then send
the IPI.

Use our own SMP enable method. This enables us to remove the
last static mapping and get both CPUs booting properly.

Tested this and it just works.
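
A compile-only sketch of the simplified flow; the jump-address variable
and the helpers are placeholders, not the real Ux500 code:

  static volatile unsigned long rom_jump_address; /* placeholder boot-ROM slot */

  static void secondary_boot(void) { /* CPU1 would start executing here */ }
  static void send_wakeup_ipi(unsigned int cpu) { (void)cpu; /* platform IPI */ }

  static int boot_secondary_sketch(unsigned int cpu)
  {
          rom_jump_address = (unsigned long)secondary_boot; /* point the ROM here */
          send_wakeup_ipi(cpu);   /* wake the second CPU to start there */
          return 0;
  }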

Signed-off-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Signed-off-by: Olof Johansson &lt;olof@lixom.net&gt;
Signed-off-by: Kevin Hilman &lt;khilman@linaro.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit c00def71efd919e8ae835a25f4f4c80a4b2d36d3 upstream.

This removes a lot of ancient cruft from the Ux500 SMP boot.
Instead of the pen grab/release, just point the ROM to
secondary_boot() and start the second CPU there, then send
the IPI.

Use our own SMP enable method. This enables us to remove the
last static mapping and get both CPUs booting properly.

Tested this and it just works.

Signed-off-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Signed-off-by: Olof Johansson &lt;olof@lixom.net&gt;
Signed-off-by: Kevin Hilman &lt;khilman@linaro.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>arm64: errata: use KBUILD_CFLAGS_MODULE for erratum #843419</title>
<updated>2015-10-27T00:53:38+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2015-10-08T10:11:17+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=c8913ff16f29cef7a4b74b5be6d74ae52d10ff8c'/>
<id>c8913ff16f29cef7a4b74b5be6d74ae52d10ff8c</id>
<content type='text'>
commit b6dd8e0719c0d2d01429639a11b7bc2677de240c upstream.

Commit df057cc7b4fa ("arm64: errata: add module build workaround for
erratum #843419") sets CFLAGS_MODULE to ensure that the large memory
model is used by the compiler when building kernel modules.

However, CFLAGS_MODULE is an environment variable and intended to be
overridden on the command line, which appears to be the case with the
Ubuntu kernel packaging system, so use KBUILD_CFLAGS_MODULE instead.

Cc: Ard Biesheuvel &lt;ard.biesheuvel@linaro.org&gt;
Fixes: df057cc7b4fa ("arm64: errata: add module build workaround for erratum #843419")
Reported-by: Dann Frazier &lt;dann.frazier@canonical.com&gt;
Tested-by: Dann Frazier &lt;dann.frazier@canonical.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit b6dd8e0719c0d2d01429639a11b7bc2677de240c upstream.

Commit df057cc7b4fa ("arm64: errata: add module build workaround for
erratum #843419") sets CFLAGS_MODULE to ensure that the large memory
model is used by the compiler when building kernel modules.

However, CFLAGS_MODULE is an environment variable and intended to be
overridden on the command line, which appears to be the case with the
Ubuntu kernel packaging system, so use KBUILD_CFLAGS_MODULE instead.

Cc: Ard Biesheuvel &lt;ard.biesheuvel@linaro.org&gt;
Fixes: df057cc7b4fa ("arm64: errata: add module build workaround for erratum #843419")
Reported-by: Dann Frazier &lt;dann.frazier@canonical.com&gt;
Tested-by: Dann Frazier &lt;dann.frazier@canonical.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>crypto: camellia_aesni_avx - Fix CPU feature checks</title>
<updated>2015-10-27T00:53:37+00:00</updated>
<author>
<name>Ben Hutchings</name>
<email>ben@decadent.org.uk</email>
</author>
<published>2015-10-06T11:31:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=74ca3884072bb2a3980331b7996b110320bcd68c'/>
<id>74ca3884072bb2a3980331b7996b110320bcd68c</id>
<content type='text'>
commit 92b279070dd6c94265db32748bbeb5b583588de9 upstream.

We need to explicitly check the AVX and AES CPU features, as we can't
infer them from the related XSAVE feature flags.  For example, the
Core i3 2310M passes the XSAVE feature test but does not implement
AES-NI.
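
For illustration, the same feature bits can be read from userspace via
CPUID (a sketch; the kernel uses its own cpu feature helpers):

  #include &lt;cpuid.h&gt;
  #include &lt;stdio.h&gt;

  int main(void)
  {
          unsigned int eax, ebx, ecx, edx;

          if (!__get_cpuid(1, &amp;eax, &amp;ebx, &amp;ecx, &amp;edx))
                  return 1;
          /* CPUID.1:ECX bit 25 = AES-NI, bit 28 = AVX (Intel SDM). */
          printf("aes=%d avx=%d\n",
                 !!(ecx &amp; (1u &lt;&lt; 25)), !!(ecx &amp; (1u &lt;&lt; 28)));
          return 0;
  }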

Reported-and-tested-by: Stéphane Glondu &lt;glondu@debian.org&gt;
References: https://bugs.debian.org/800934
Fixes: ce4f5f9b65ae ("x86/fpu, crypto x86/camellia_aesni_avx: Simplify...")
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 92b279070dd6c94265db32748bbeb5b583588de9 upstream.

We need to explicitly check the AVX and AES CPU features, as we can't
infer them from the related XSAVE feature flags.  For example, the
Core i3 2310M passes the XSAVE feature test but does not implement
AES-NI.

Reported-and-tested-by: Stéphane Glondu &lt;glondu@debian.org&gt;
References: https://bugs.debian.org/800934
Fixes: ce4f5f9b65ae ("x86/fpu, crypto x86/camellia_aesni_avx: Simplify...")
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>crypto: sparc - initialize blkcipher.ivsize</title>
<updated>2015-10-27T00:53:37+00:00</updated>
<author>
<name>Dave Kleikamp</name>
<email>dave.kleikamp@oracle.com</email>
</author>
<published>2015-10-05T15:08:51+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=4a371439ba97d44de98e8d0623f0181bbe464bcc'/>
<id>4a371439ba97d44de98e8d0623f0181bbe464bcc</id>
<content type='text'>
commit a66d7f724a96d6fd279bfbd2ee488def6b081bea upstream.

Some of the crypto algorithms write to the initialization vector,
but no space has been allocated for it. This clobbers adjacent memory.
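
The shape of the fix, sketched against a simplified stand-in for the
kernel's blkcipher algorithm definition:

  struct blkcipher_alg_sketch {
          unsigned int min_keysize;
          unsigned int max_keysize;
          unsigned int ivsize;    /* left 0 before: no room reserved for the IV */
  };

  static const struct blkcipher_alg_sketch aes_cbc_sketch = {
          .min_keysize = 16,
          .max_keysize = 32,
          .ivsize      = 16,      /* AES block size: space for the IV */
  };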

Signed-off-by: Dave Kleikamp &lt;dave.kleikamp@oracle.com&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit a66d7f724a96d6fd279bfbd2ee488def6b081bea upstream.

Some of the crypto algorithms write to the initialization vector,
but no space has been allocated for it. This clobbers adjacent memory.

Signed-off-by: Dave Kleikamp &lt;dave.kleikamp@oracle.com&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>sched/preempt, powerpc, kvm: Use need_resched() instead of should_resched()</title>
<updated>2015-10-22T21:49:35+00:00</updated>
<author>
<name>Konstantin Khlebnikov</name>
<email>khlebnikov@yandex-team.ru</email>
</author>
<published>2015-07-15T09:52:03+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=945f4ef0193ed39d83f2adc9bb4e0e045584416f'/>
<id>945f4ef0193ed39d83f2adc9bb4e0e045584416f</id>
<content type='text'>
commit c56dadf39761a6157239cac39e3988998c994f98 upstream.

The function should_resched() is equivalent to (!preempt_count() &amp;&amp;
need_resched()). In a preemptible kernel, preempt_count() is non-zero
here because vc-&gt;lock is held, so should_resched() can never return
true; need_resched() alone is the intended check.
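
Spelled out as a sketch (not the real inline helpers):

  static int should_resched_sketch(int preempt_count, int need_resched)
  {
          /* With vc-&gt;lock held, preempt_count is non-zero on a
           * preemptible kernel, so this can never return true. */
          return !preempt_count &amp;&amp; need_resched;
  }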

Signed-off-by: Konstantin Khlebnikov &lt;khlebnikov@yandex-team.ru&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Alexander Graf &lt;agraf@suse.de&gt;
Cc: Boris Ostrovsky &lt;boris.ostrovsky@oracle.com&gt;
Cc: David Vrabel &lt;david.vrabel@citrix.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Cc: Paul Mackerras &lt;paulus@samba.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Link: http://lkml.kernel.org/r/20150715095203.12246.72922.stgit@buzz
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: Guenter Roeck &lt;linux@roeck-us.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit c56dadf39761a6157239cac39e3988998c994f98 upstream.

The function should_resched() is equivalent to (!preempt_count() &amp;&amp;
need_resched()). In a preemptible kernel, preempt_count() is non-zero
here because vc-&gt;lock is held, so should_resched() can never return
true; need_resched() alone is the intended check.

Signed-off-by: Konstantin Khlebnikov &lt;khlebnikov@yandex-team.ru&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Alexander Graf &lt;agraf@suse.de&gt;
Cc: Boris Ostrovsky &lt;boris.ostrovsky@oracle.com&gt;
Cc: David Vrabel &lt;david.vrabel@citrix.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Cc: Paul Mackerras &lt;paulus@samba.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Link: http://lkml.kernel.org/r/20150715095203.12246.72922.stgit@buzz
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: Guenter Roeck &lt;linux@roeck-us.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>sched/preempt: Fix cond_resched_lock() and cond_resched_softirq()</title>
<updated>2015-10-22T21:49:35+00:00</updated>
<author>
<name>Konstantin Khlebnikov</name>
<email>khlebnikov@yandex-team.ru</email>
</author>
<published>2015-07-15T09:52:04+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=29b8eaffcba906afc07374022f43351903b94cee'/>
<id>29b8eaffcba906afc07374022f43351903b94cee</id>
<content type='text'>
commit fe32d3cd5e8eb0f82e459763374aa80797023403 upstream.

These functions check should_resched() before unlocking the spinlock or
re-enabling bottom halves: preempt_count is always non-zero at that
point, so should_resched() always returns false. As a result,
cond_resched_lock() worked only when spin_needbreak was set.

This patch adds argument "preempt_offset" to should_resched().

preempt_count offset constants for that:

  PREEMPT_DISABLE_OFFSET  - offset after preempt_disable()
  PREEMPT_LOCK_OFFSET     - offset after spin_lock()
  SOFTIRQ_DISABLE_OFFSET  - offset after local_bh_disable()
  SOFTIRQ_LOCK_OFFSET     - offset after spin_lock_bh()
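
As a sketch with illustrative values, the offset-aware test becomes:

  #define PREEMPT_LOCK_OFFSET_SKETCH  1   /* one spin_lock() held */

  static int should_resched_sketch(int preempt_count, int need_resched,
                                   int preempt_offset)
  {
          /* Reschedule only when nothing beyond the expected lock or
           * bh-disable contribution is held. */
          return preempt_count == preempt_offset &amp;&amp; need_resched;
  }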

Signed-off-by: Konstantin Khlebnikov &lt;khlebnikov@yandex-team.ru&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Alexander Graf &lt;agraf@suse.de&gt;
Cc: Boris Ostrovsky &lt;boris.ostrovsky@oracle.com&gt;
Cc: David Vrabel &lt;david.vrabel@citrix.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Cc: Paul Mackerras &lt;paulus@samba.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Fixes: bdb438065890 ("sched: Extract the basic add/sub preempt_count modifiers")
Link: http://lkml.kernel.org/r/20150715095204.12246.98268.stgit@buzz
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit fe32d3cd5e8eb0f82e459763374aa80797023403 upstream.

These functions check should_resched() before unlocking the spinlock or
re-enabling bottom halves: preempt_count is always non-zero at that
point, so should_resched() always returns false. As a result,
cond_resched_lock() worked only when spin_needbreak was set.

This patch adds argument "preempt_offset" to should_resched().

preempt_count offset constants for that:

  PREEMPT_DISABLE_OFFSET  - offset after preempt_disable()
  PREEMPT_LOCK_OFFSET     - offset after spin_lock()
  SOFTIRQ_DISABLE_OFFSET  - offset after local_bh_disable()
  SOFTIRQ_LOCK_OFFSET     - offset after spin_lock_bh()

Signed-off-by: Konstantin Khlebnikov &lt;khlebnikov@yandex-team.ru&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Cc: Alexander Graf &lt;agraf@suse.de&gt;
Cc: Boris Ostrovsky &lt;boris.ostrovsky@oracle.com&gt;
Cc: David Vrabel &lt;david.vrabel@citrix.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Cc: Paul Mackerras &lt;paulus@samba.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Fixes: bdb438065890 ("sched: Extract the basic add/sub preempt_count modifiers")
Link: http://lkml.kernel.org/r/20150715095204.12246.98268.stgit@buzz
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
</feed>
