<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/include, branch v3.0.74</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>spinlocks and preemption points need to be at least compiler barriers</title>
<updated>2013-04-12T16:18:09+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2013-04-09T17:48:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=17229e4f8ef6a7cb514d7d4d67197cd6a8b06eca'/>
<id>17229e4f8ef6a7cb514d7d4d67197cd6a8b06eca</id>
<content type='text'>
commit 386afc91144b36b42117b0092893f15bc8798a80 upstream.

In UP and non-preempt respectively, the spinlocks and preemption
disable/enable points are stubbed out entirely, because there is no
regular code that can ever hit the kind of concurrency they are meant to
protect against.

However, while there is no regular code that can cause scheduling, we
_do_ end up having some exceptional (literally!) code that can do so,
and that we need to make sure does not ever get moved into the critical
region by the compiler.

In particular, get_user() and put_user() are generally implemented as
inline asm statements (even if the inline asm may then make a call
instruction to call out-of-line), and can obviously cause a page fault
and IO as a result.  If that inline asm has been scheduled into the
middle of a preemption-safe (or spinlock-protected) code region, we
obviously lose.

Now, admittedly this is *very* unlikely to actually ever happen, and
we've not seen examples of actual bugs related to this.  But partly
exactly because it's so hard to trigger and the resulting bug is so
subtle, we should be extra careful to get this right.

So make sure that even when preemption is disabled, and we don't have to
generate any actual *code* to explicitly tell the system that we are in
a preemption-disabled region, we at least tell the compiler not to move
things around the critical region.
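
Concretely, the stubbed-out versions need to expand to a compiler
barrier.  A minimal sketch of the idea (not the literal patch hunk; the
!CONFIG_PREEMPT stubs live in include/linux/preempt.h):

        /* compiler-gcc.h: emits no code, but forbids the compiler from
         * moving memory accesses across it */
        #define barrier() __asm__ __volatile__("": : :"memory")

        /* preempt.h, !CONFIG_PREEMPT: the stubs become barriers
         * instead of expanding to nothing */
        #define preempt_disable()       barrier()
        #define preempt_enable()        barrier()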

This patch grew out of the same discussion that caused commits
79e5f05edcbf ("ARC: Add implicit compiler barrier to raw_local_irq*
functions") and 3e2e0d2c222b ("tile: comment assumption about
__insn_mtspr for &lt;asm/irqflags.h&gt;") to come about.

Note for stable: use discretion when/if applying this.  As mentioned,
this bug may never have actually bitten anybody, and gcc may never have
done the required code motion for it to possibly ever trigger in
practice.

Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Steven Rostedt &lt;srostedt@redhat.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>libata: Set max sector to 65535 for Slimtype DVD A DS8A8SH drive</title>
<updated>2013-04-12T16:18:09+00:00</updated>
<author>
<name>Shan Hai</name>
<email>shan.hai@windriver.com</email>
</author>
<published>2013-03-18T02:30:44+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=c55f9197cfc3d968c64ed4e7762214c04090426e'/>
<id>c55f9197cfc3d968c64ed4e7762214c04090426e</id>
<content type='text'>
commit a32450e127fc6e5ca6d958ceb3cfea4d30a00846 upstream.

The Slimtype DVD A  DS8A8SH drive locks up when max sector is smaller than
65535, and the backtrace below is observed when it locks up:

INFO: task flush-8:32:1130 blocked for more than 120 seconds.
"echo 0 &gt; /proc/sys/kernel/hung_task_timeout_secs" disables this message.
flush-8:32      D ffffffff8180cf60     0  1130      2 0x00000000
 ffff880273aef618 0000000000000046 0000000000000005 ffff880273aee000
 ffff880273aee000 ffff880273aeffd8 ffff880273aee010 ffff880273aee000
 ffff880273aeffd8 ffff880273aee000 ffff88026e842ea0 ffff880274a10000
Call Trace:
 [&lt;ffffffff8168fc2d&gt;] schedule+0x5d/0x70
 [&lt;ffffffff8168fccc&gt;] io_schedule+0x8c/0xd0
 [&lt;ffffffff81324461&gt;] get_request+0x731/0x7d0
 [&lt;ffffffff8133dc60&gt;] ? cfq_allow_merge+0x50/0x90
 [&lt;ffffffff81083aa0&gt;] ? wake_up_bit+0x40/0x40
 [&lt;ffffffff81320443&gt;] ? bio_attempt_back_merge+0x33/0x110
 [&lt;ffffffff813248ea&gt;] blk_queue_bio+0x23a/0x3f0
 [&lt;ffffffff81322176&gt;] generic_make_request+0xc6/0x120
 [&lt;ffffffff81322308&gt;] submit_bio+0x138/0x160
 [&lt;ffffffff811d7596&gt;] ? bio_alloc_bioset+0x96/0x120
 [&lt;ffffffff811d1f61&gt;] submit_bh+0x1f1/0x220
 [&lt;ffffffff811d48b8&gt;] __block_write_full_page+0x228/0x340
 [&lt;ffffffff811d3650&gt;] ? attach_nobh_buffers+0xc0/0xc0
 [&lt;ffffffff811d8960&gt;] ? I_BDEV+0x10/0x10
 [&lt;ffffffff811d8960&gt;] ? I_BDEV+0x10/0x10
 [&lt;ffffffff811d4ab6&gt;] block_write_full_page_endio+0xe6/0x100
 [&lt;ffffffff811d4ae5&gt;] block_write_full_page+0x15/0x20
 [&lt;ffffffff811d9268&gt;] blkdev_writepage+0x18/0x20
 [&lt;ffffffff81142527&gt;] __writepage+0x17/0x40
 [&lt;ffffffff811438ba&gt;] write_cache_pages+0x34a/0x4a0
 [&lt;ffffffff81142510&gt;] ? set_page_dirty+0x70/0x70
 [&lt;ffffffff81143a61&gt;] generic_writepages+0x51/0x80
 [&lt;ffffffff81143ab0&gt;] do_writepages+0x20/0x50
 [&lt;ffffffff811c9ed6&gt;] __writeback_single_inode+0xa6/0x2b0
 [&lt;ffffffff811ca861&gt;] writeback_sb_inodes+0x311/0x4d0
 [&lt;ffffffff811caaa6&gt;] __writeback_inodes_wb+0x86/0xd0
 [&lt;ffffffff811cad43&gt;] wb_writeback+0x1a3/0x330
 [&lt;ffffffff816916cf&gt;] ? _raw_spin_lock_irqsave+0x3f/0x50
 [&lt;ffffffff811b8362&gt;] ? get_nr_inodes+0x52/0x70
 [&lt;ffffffff811cb0ac&gt;] wb_do_writeback+0x1dc/0x260
 [&lt;ffffffff8168dd34&gt;] ? schedule_timeout+0x204/0x240
 [&lt;ffffffff811cb232&gt;] bdi_writeback_thread+0x102/0x2b0
 [&lt;ffffffff811cb130&gt;] ? wb_do_writeback+0x260/0x260
 [&lt;ffffffff81083550&gt;] kthread+0xc0/0xd0
 [&lt;ffffffff81083490&gt;] ? kthread_worker_fn+0x1b0/0x1b0
 [&lt;ffffffff8169a3ec&gt;] ret_from_fork+0x7c/0xb0
 [&lt;ffffffff81083490&gt;] ? kthread_worker_fn+0x1b0/0x1b0

 The above trace was triggered by
   "dd if=/dev/zero of=/dev/sr0 bs=2048 count=32768"

 It was previously working by accident, since another bug introduced
 by 4dce8ba94c7 (libata: Use 'bool' return value for ata_id_XXX) caused
 all drives to use maxsect=65535.
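
 A sketch of the shape of the upstream fix (names as in
 drivers/ata/libata-core.c and include/linux/libata.h; details
 approximate, not the literal hunk):

        /* blacklist the drive ... */
        static const struct ata_blacklist_entry ata_device_blacklist[] = {
                ...
                { "Slimtype DVD A  DS8A8SH", NULL, ATA_HORKAGE_MAX_SEC_LBA48 },
                ...
        };

        /* ... and cap max_sectors when the flag is set */
        if (dev-&gt;horkage &amp; ATA_HORKAGE_MAX_SEC_LBA48)
                dev-&gt;max_sectors = ATA_MAX_SECTORS_LBA48;       /* 65535 */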

Signed-off-by: Shan Hai &lt;shan.hai@windriver.com&gt;
Signed-off-by: Jeff Garzik &lt;jgarzik@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>libata: Use integer return value for atapi_command_packet_set</title>
<updated>2013-04-12T16:18:08+00:00</updated>
<author>
<name>Shan Hai</name>
<email>shan.hai@windriver.com</email>
</author>
<published>2013-03-18T02:30:43+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=878315cbf04dde3f93bf796f1835ae8d07604ba7'/>
<id>878315cbf04dde3f93bf796f1835ae8d07604ba7</id>
<content type='text'>
commit d8668fcb0b257d9fdcfbe5c172a99b8d85e1cd82 upstream.

The function returns the command packet set, i.e. the type, of ATAPI
drives, so it should return an integer value.  Commit 4dce8ba94c7
(libata: Use 'bool' return value for ata_id_XXX), in v2.6.39, changed
the return type from int to bool.  That change causes all ATAPI class
drives to be treated as TYPE_TAPE and their max_sectors to be set to
65535 because of commit f8d8e5799b7 (libata: increase 128 KB / cmd
limit for ATAPI tape drives): the function now returns true for all
ATAPI class drives, and TYPE_TAPE is defined as 0x01.
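
For reference, a sketch of the helper after the fix (as in
include/linux/ata.h; the command packet set sits in bits 8-12 of
IDENTIFY word 0, so a bool return truncates every nonzero type to
1 == TYPE_TAPE):

        static inline int atapi_command_packet_set(const u16 *id)
        {
                return (id[ATA_ID_CONFIG] &gt;&gt; 8) &amp; 0x1f;
        }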

Signed-off-by: Shan Hai &lt;shan.hai@windriver.com&gt;
Signed-off-by: Jeff Garzik &lt;jgarzik@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>thermal: shorten too long mcast group name</title>
<updated>2013-04-05T17:16:53+00:00</updated>
<author>
<name>Masatake YAMATO</name>
<email>yamato@redhat.com</email>
</author>
<published>2013-04-01T18:50:40+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=0cbf0cbd285ef39202743ecfd62b4fe2dcdc81fd'/>
<id>0cbf0cbd285ef39202743ecfd62b4fe2dcdc81fd</id>
<content type='text'>
[ Upstream commits 73214f5d9f33b79918b1f7babddd5c8af28dd23d
  and f1e79e208076ffe7bad97158275f1c572c04f5c7, the latter
  adds an assertion to genetlink to prevent this from happening
  again in the future. ]

The original name is too long.
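
For context, generic netlink multicast group names must fit in
GENL_NAMSIZ (16 bytes including the terminating NUL).  The follow-up
commit asserts this at registration time; a sketch of the check
(approximate, in genl_register_mc_group()):

        BUG_ON(grp-&gt;name[0] == '\0');
        BUG_ON(memchr(grp-&gt;name, '\0', GENL_NAMSIZ) == NULL);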

Signed-off-by: Masatake YAMATO &lt;yamato@redhat.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>KVM: Ensure all vcpus are consistent with in-kernel irqchip settings</title>
<updated>2013-04-05T17:16:38+00:00</updated>
<author>
<name>Avi Kivity</name>
<email>avi@redhat.com</email>
</author>
<published>2013-03-19T11:36:55+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=8868daebc1b6240d07d5c6428f8bc8631b2bed42'/>
<id>8868daebc1b6240d07d5c6428f8bc8631b2bed42</id>
<content type='text'>
commit 3e515705a1f46beb1c942bb8043c16f8ac7b1e9e upstream.

If some vcpus are created before KVM_CREATE_IRQCHIP, then
irqchip_in_kernel() and vcpu-&gt;arch.apic will be inconsistent, leading
to potential NULL pointer dereferences.

Fix by:
- ensuring that no vcpus are installed when KVM_CREATE_IRQCHIP is called
- ensuring that a vcpu has an apic if it is installed after KVM_CREATE_IRQCHIP

This is somewhat long-winded because vcpu-&gt;arch.apic is created without
kvm-&gt;lock held.
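
A minimal sketch of the two checks (names approximate, not the literal
patch):

        /* KVM_CREATE_IRQCHIP: refuse once any vcpu already exists */
        mutex_lock(&amp;kvm-&gt;lock);
        r = -EINVAL;
        if (atomic_read(&amp;kvm-&gt;online_vcpus))
                goto create_irqchip_unlock;

        /* vcpu creation: refuse a vcpu that is inconsistent with the
         * irqchip mode already chosen for the VM */
        if (irqchip_in_kernel(kvm) != (vcpu-&gt;arch.apic != NULL))
                return -EINVAL;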

Based on earlier patch by Michael Ellerman.

Signed-off-by: Michael Ellerman &lt;michael@ellerman.id.au&gt;
Signed-off-by: Avi Kivity &lt;avi@redhat.com&gt;
Signed-off-by: Jiri Slaby &lt;jslaby@suse.cz&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>NFSv4: Fix an Oops in the NFSv4 getacl code</title>
<updated>2013-04-05T17:16:38+00:00</updated>
<author>
<name>Trond Myklebust</name>
<email>Trond.Myklebust@netapp.com</email>
</author>
<published>2013-03-19T11:36:53+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=01b140abad66f022ff6dff7cc1307b07281035fa'/>
<id>01b140abad66f022ff6dff7cc1307b07281035fa</id>
<content type='text'>
commit 331818f1c468a24e581aedcbe52af799366a9dfe upstream.

Commit bf118a342f10dafe44b14451a1392c3254629a1f (NFSv4: include bitmap
in nfsv4 get acl data) introduces the 'acl_scratch' page for the case
where we may need to decode multi-page data. However, it fails to take
into account the fact that the variable may be NULL (for the case where
we're not doing multi-page decode), and it also attaches it to the
encoding xdr_stream rather than the decoding one.

The immediate result is an Oops in nfs4_xdr_enc_getacl due to the
call to page_address() with a NULL page pointer.
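
The shape of the decode-side fix, as a sketch (field names
approximate): attach the scratch page to the decoding stream, and only
when it exists:

        /* nfs4_xdr_dec_getacl(), sketch */
        if (res-&gt;acl_scratch)
                xdr_set_scratch_buffer(xdr,
                                       page_address(res-&gt;acl_scratch),
                                       PAGE_SIZE);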

Signed-off-by: Trond Myklebust &lt;Trond.Myklebust@netapp.com&gt;
Cc: Andy Adamson &lt;andros@netapp.com&gt;
Signed-off-by: Jiri Slaby &lt;jslaby@suse.cz&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>NFSv4: include bitmap in nfsv4 get acl data</title>
<updated>2013-04-05T17:16:38+00:00</updated>
<author>
<name>Andy Adamson</name>
<email>andros@netapp.com</email>
</author>
<published>2013-03-19T11:36:52+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=c938c22b48302eeb9a6f3cc83f223f37d98ba6f7'/>
<id>c938c22b48302eeb9a6f3cc83f223f37d98ba6f7</id>
<content type='text'>
commit bf118a342f10dafe44b14451a1392c3254629a1f upstream.

The NFSv4 bitmap size is unbounded: a server can return an arbitrarily
sized bitmap in an FATTR4_WORD0_ACL request.  Instead of using
nfs4_fattr_bitmap_maxsz as a guess at the maximum bitmask returned by a
server, include the bitmap (xdr length plus bitmasks) and the acl data
xdr length in the (cached) acl page data.

This is a general solution to commit e5012d1f "NFSv4.1: update
nfs4_fattr_bitmap_maxsz" and fixes hitting a BUG_ON in xdr_shrink_bufhead
when getting ACLs.

Fix a bug in decode_getacl that returned -EINVAL on ACLs larger than a
page when getxattr was called with a NULL buffer, preventing ACLs &gt;
PAGE_SIZE from being retrieved.
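
A decode-side sketch of why the bitmap must be carried along (helper
names from fs/nfs/nfs4xdr.c; exact signatures approximate):

        status = decode_op_hdr(xdr, OP_GETATTR);
        if (!status)
                status = decode_attr_bitmap(xdr, bitmap); /* server-sized */
        if (!status)
                status = decode_attr_length(xdr, &amp;attrlen, &amp;savep);
        /* only now, after the variable-size bitmap, is the offset of
         * the acl data within the cached pages known */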

Signed-off-by: Andy Adamson &lt;andros@netapp.com&gt;
Signed-off-by: Trond Myklebust &lt;Trond.Myklebust@netapp.com&gt;
Signed-off-by: Jiri Slaby &lt;jslaby@suse.cz&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>signal: Define __ARCH_HAS_SA_RESTORER so we know whether to clear sa_restorer</title>
<updated>2013-04-05T17:16:35+00:00</updated>
<author>
<name>Ben Hutchings</name>
<email>ben@decadent.org.uk</email>
</author>
<published>2012-11-26T03:24:19+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=1c190534db77a019936d95a1826a55bf34d7ed23'/>
<id>1c190534db77a019936d95a1826a55bf34d7ed23</id>
<content type='text'>
Vaguely based on upstream commit 574c4866e33d 'consolidate kernel-side
struct sigaction declarations'.

flush_signal_handlers() needs to know whether sigaction::sa_restorer
is defined, not whether SA_RESTORER is defined.  Define the
__ARCH_HAS_SA_RESTORER macro to indicate this.
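
Usage sketch: an architecture defines the macro alongside its struct
sigaction when the sa_restorer member exists, and generic code keys off
it (approximate):

        /* arch signal header, when struct sigaction has sa_restorer */
        #define __ARCH_HAS_SA_RESTORER

        /* kernel/signal.c: flush_signal_handlers() */
        #ifdef __ARCH_HAS_SA_RESTORER
                ka-&gt;sa.sa_restorer = NULL;
        #endif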

Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Cc: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>exec: use -ELOOP for max recursion depth</title>
<updated>2013-03-28T19:06:14+00:00</updated>
<author>
<name>Kees Cook</name>
<email>keescook@chromium.org</email>
</author>
<published>2012-12-18T00:03:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=ea8d2d19ad17ceafc883b86e448a405cf7808927'/>
<id>ea8d2d19ad17ceafc883b86e448a405cf7808927</id>
<content type='text'>
commit d740269867021faf4ce38a449353d2b986c34a67 upstream.

To avoid an explosion of request_module calls on a chain of abusive
scripts, fail maximum recursion with -ELOOP instead of -ENOEXEC. As soon
as the maximum recursion depth is hit, the error propagates all the way
back up the chain, aborting immediately.

This also has the side-effect of stopping the user's shell from attempting
to reexecute the top-level file as a shell script. As seen in the
dash source:

        if (cmd != path_bshell &amp;&amp; errno == ENOEXEC) {
                *argv-- = cmd;
                *argv = cmd = path_bshell;
                goto repeat;
        }

The above logic was designed to automatically run scripts that lacked
the "#!" header, not to retry failed recursion. On a legitimate -ENOEXEC,
things continue to behave as the shell expects.

Additionally, when tracking recursion, the binfmt handlers should not be
involved. The recursion being tracked is the depth of calls through
search_binary_handler(), so that function should be exclusively responsible
for tracking the depth.
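
A sketch of the resulting logic in search_binary_handler() (fs/exec.c;
simplified, arguments and error handling omitted):

        int search_binary_handler(struct linux_binprm *bprm, ...)
        {
                if (bprm-&gt;recursion_depth &gt; 5)
                        return -ELOOP;
                ...
                bprm-&gt;recursion_depth++;
                retval = fmt-&gt;load_binary(bprm);
                bprm-&gt;recursion_depth--;
                ...
        }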

Signed-off-by: Kees Cook &lt;keescook@chromium.org&gt;
Cc: halfdog &lt;me@halfdog.net&gt;
Cc: P J P &lt;ppandit@redhat.com&gt;
Cc: Alexander Viro &lt;viro@zeniv.linux.org.uk&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Cc: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>inet: limit length of fragment queue hash table bucket lists</title>
<updated>2013-03-28T19:05:59+00:00</updated>
<author>
<name>Hannes Frederic Sowa</name>
<email>hannes@stressinduktion.org</email>
</author>
<published>2013-03-15T11:32:30+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=7b7a1b8b3bd1742ca5ab259e741da0070e936db0'/>
<id>7b7a1b8b3bd1742ca5ab259e741da0070e936db0</id>
<content type='text'>
[ Upstream commit 5a3da1fe9561828d0ca7eca664b16ec2b9bf0055 ]

This patch introduces a constant limit on the fragment queue hash
table bucket list lengths. The limit of 128 is chosen somewhat
arbitrarily and just ensures that we can fill up the fragment cache with
empty packets up to the default ip_frag_high_thresh limits. It should
just protect against list iteration eating considerable amounts of cpu.

If we reach the maximum length in one hash bucket, a warning is printed.
This is implemented on the caller side of inet_frag_find to distinguish
between the different users of inet_fragment.c.

I dropped the out of memory warning in the ipv4 fragment lookup path,
because we already get a warning from the slab allocator.
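
A sketch of the lookup-side change (net/ipv4/inet_fragment.c,
simplified; the hlist iteration macro of that era takes an extra node
argument):

        #define INETFRAGS_MAXDEPTH 128

        /* inet_frag_find(), sketch */
        int depth = 0;

        hlist_for_each_entry(q, n, &amp;hb-&gt;chain, list) {
                if (q-&gt;net == nf &amp;&amp; f-&gt;match(q, key)) {
                        atomic_inc(&amp;q-&gt;refcnt);
                        return q;
                }
                depth++;
        }
        if (depth &lt;= INETFRAGS_MAXDEPTH)
                return inet_frag_create(nf, f, key);
        return ERR_PTR(-ENOBUFS);       /* caller prints the warning */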

Cc: Eric Dumazet &lt;eric.dumazet@gmail.com&gt;
Cc: Jesper Dangaard Brouer &lt;jbrouer@redhat.com&gt;
Signed-off-by: Hannes Frederic Sowa &lt;hannes@stressinduktion.org&gt;
Acked-by: Eric Dumazet &lt;edumazet@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
</feed>
