<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/drivers/dma, branch v3.12.25</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>dmaengine: dw: go back to plain {request,free}_irq() calls</title>
<updated>2014-06-09T13:53:50+00:00</updated>
<author>
<name>Andy Shevchenko</name>
<email>andriy.shevchenko@linux.intel.com</email>
</author>
<published>2014-05-07T07:56:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=ae5b411abd7a36af90aa0313e3e992b065eac944'/>
<id>ae5b411abd7a36af90aa0313e3e992b065eac944</id>
<content type='text'>
commit 97977f7576a89cb9436c000ae703c0d515e748ac upstream.

The commit dbde5c29 "dw_dmac: use devm_* functions to simplify code" converted
the probe function to use devm_* helpers and simultaneously introduced a
regression. We need to ensure the irq is disabled, then ensure that no more
tasklets can be scheduled, and only then is it safe to call tasklet_kill().

free_irq() ensures that the irq is disabled and also waits until all scheduled
interrupt handlers have finished, by invoking synchronize_irq(). So we only
need to call tasklet_kill() after invoking free_irq().
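
A minimal sketch of the resulting teardown order, assuming a hypothetical
example_dev with an irq line and a tasklet (not the driver's actual code):

static void example_remove(struct example_dev *edev)
{
	/*
	 * free_irq() disables the irq and calls synchronize_irq(), so
	 * after it returns the handler can schedule no more tasklets.
	 */
	free_irq(edev-&gt;irq, edev);

	/* Only now is it safe to wait for, and kill, the tasklet. */
	tasklet_kill(&amp;edev-&gt;tasklet);
}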

Signed-off-by: Andy Shevchenko &lt;andriy.shevchenko@linux.intel.com&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@intel.com&gt;
Signed-off-by: Jiri Slaby &lt;jslaby@suse.cz&gt;
</content>
</entry>
<entry>
<title>dma: mv_xor: Flush descriptors before activating a channel</title>
<updated>2014-06-09T13:53:49+00:00</updated>
<author>
<name>Ezequiel Garcia</name>
<email>ezequiel.garcia@free-electrons.com</email>
</author>
<published>2014-05-21T21:02:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=ac0dc6aaaf89fb8a924bf5a2e83f55a82e76a527'/>
<id>ac0dc6aaaf89fb8a924bf5a2e83f55a82e76a527</id>
<content type='text'>
commit 5a9a55bf9157d3490b0c8c4c81d4708602c26e07 upstream.

We need to use writel() instead of writel_relaxed() when starting
a channel, to ensure all the descriptors have been flushed before
the activation.

While at it, remove the unneeded read-modify-write and make the
code simpler.
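
A hedged sketch of the change (XOR_ACTIVATION is a hypothetical register
offset): writel_relaxed() carries no barrier, while writel() guarantees
the descriptor writes in coherent memory are visible first:

/* Before: relaxed read-modify-write, no ordering vs. the descriptors. */
u32 val = readl_relaxed(base + XOR_ACTIVATION);
writel_relaxed(val | 1, base + XOR_ACTIVATION);

/* After: a single writel(); its implicit barrier flushes the
 * descriptors before the channel starts fetching them. */
writel(1, base + XOR_ACTIVATION);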

Signed-off-by: Lior Amsalem &lt;alior@marvell.com&gt;
Signed-off-by: Ezequiel Garcia &lt;ezequiel.garcia@free-electrons.com&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
Signed-off-by: Jiri Slaby &lt;jslaby@suse.cz&gt;
</content>
</entry>
<entry>
<title>dma: edma: fix incorrect SG list handling</title>
<updated>2014-05-15T07:56:15+00:00</updated>
<author>
<name>Sekhar Nori</name>
<email>nsekhar@ti.com</email>
</author>
<published>2014-03-19T05:55:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=4d22eb083cf667129e9158b407dda5588c89c64a'/>
<id>4d22eb083cf667129e9158b407dda5588c89c64a</id>
<content type='text'>
commit 5fc68a6cad658e45dca3e0a6607df3a8e5df4ef9 upstream.

The code that handles SG lists of any length calls edma_resume()
even before edma_start() is called. This is incorrect
because edma_resume() enables edma events on the channel,
after which the CPU (in edma_start()) cannot clear posted
events by writing to ECR (per the EDMA user's guide).

Because of this, EDMA transfers fail to start if, for whatever
reason, a pending EDMA event is already registered
before EDMA transfers are started. This can happen if
an EDMA event is a byproduct of device initialization.

Fix this by calling edma_resume() only if it is not the
first batch of MAX_NR_SG elements.
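
A hedged sketch of the resulting control flow (the first_batch test is
illustrative, not the driver's exact code):

if (first_batch) {
	/* Events are still disabled here, so edma_start() can clear
	 * any stale posted event via ECR before enabling them. */
	edma_start(echan-&gt;ch_num);
} else {
	/* Later batches of MAX_NR_SG elements just resume. */
	edma_resume(echan-&gt;ch_num);
}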

Without this patch, MMC/SD fails to function with DMA on the DA850
EVM. The behaviour is triggered by specific IP blocks, which can
explain why the issue was not reported before
(for example, with MMC/SD on AM335x).

Tested on DA850 EVM and AM335x EVM-SK using MMC/SD card.

Cc: Joel Fernandes &lt;joelf@ti.com&gt;
Acked-by: Joel Fernandes &lt;joelf@ti.com&gt;
Tested-by: Jon Ringle &lt;jringle@gridpoint.com&gt;
Tested-by: Alexander Holler &lt;holler@ahsoftware.de&gt;
Reported-by: Jon Ringle &lt;jringle@gridpoint.com&gt;
Signed-off-by: Sekhar Nori &lt;nsekhar@ti.com&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@intel.com&gt;
Signed-off-by: Jiri Slaby &lt;jslaby@suse.cz&gt;
</content>
</entry>
<entry>
<title>dma: ste_dma40: don't dereference freed descriptor</title>
<updated>2014-03-05T16:13:55+00:00</updated>
<author>
<name>Linus Walleij</name>
<email>linus.walleij@linaro.org</email>
</author>
<published>2014-02-13T09:39:01+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=a5a928314b25f7b5c08629a2993807e998c90224'/>
<id>a5a928314b25f7b5c08629a2993807e998c90224</id>
<content type='text'>
commit e9baa9d9d520fb0e24cca671e430689de2d4a4b2 upstream.

It appears that in the DMA40 driver the DMA tasklet will very
often dereference memory for a descriptor just freed from the
DMA40 slab. Nothing bad happens, because no other part of the driver
has yet had a chance to claim this memory, but it's really
nasty to dereference freed memory, so let's check the flag
before the descriptor is freed and store the result in a bool variable.
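
A hedged sketch of the pattern (names are illustrative, not the exact
driver code):

/* Read the flag first, into a plain bool... */
bool callback_active = !!(desc-&gt;txd.flags &amp; DMA_PREP_INTERRUPT);

/* ...then return the descriptor to the slab... */
d40_desc_free(d40c, desc);

/* ...and decide from the saved bool, never from the freed memory. */
if (callback_active)
	queue_callback(d40c);	/* hypothetical follow-up */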

Reported-by: Dan Carpenter &lt;dan.carpenter@oracle.com&gt;
Signed-off-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@intel.com&gt;
Signed-off-by: Jiri Slaby &lt;jslaby@suse.cz&gt;

</content>
</entry>
<entry>
<title>ioat: fix tasklet tear down</title>
<updated>2014-03-05T16:13:53+00:00</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2014-02-20T00:19:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=fea84e6b1cbfec4e124e06633a0d6febe7fad69a'/>
<id>fea84e6b1cbfec4e124e06633a0d6febe7fad69a</id>
<content type='text'>
commit da87ca4d4ca101f177fffd84f1f0a5e4c0343557 upstream.

Since commit 77873803363c "net_dma: mark broken" we no longer pin dma
engines active for the network-receive-offload use case.  As a result
the -&gt;free_chan_resources() that occurs after the driver self test no
longer has a NET_DMA induced -&gt;alloc_chan_resources() to back it up.  A
late firing irq can lead to ksoftirqd spinning indefinitely due to the
tasklet_disable() performed by -&gt;free_chan_resources().  Only
-&gt;alloc_chan_resources() can clear this condition in affected kernels.

This problem has been present since commit 3e037454bcfa "I/OAT: Add
support for MSI and MSI-X" in 2.6.24, but is only now exposed. Given that
the NET_DMA use case is deprecated, we can revisit moving the driver to
threaded irqs.  For now, just tear down the irq and tasklet properly by:

1/ Disable the irq from triggering the tasklet

2/ Disable the irq from re-arming

3/ Flush inflight interrupts

4/ Flush the timer

5/ Flush inflight tasklets
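
A hedged sketch of those five steps with standard kernel primitives (the
run-state bit and channel-control register are assumptions):

/* 1/ stop the irq handler from scheduling the tasklet */
clear_bit(EXAMPLE_RUN, &amp;chan-&gt;state);

/* 2/ prevent the channel from re-arming the irq (hypothetical register) */
writew(0, chan-&gt;reg_base + EXAMPLE_CHANCTRL);

/* 3/ flush in-flight interrupts */
synchronize_irq(irq);

/* 4/ flush the timer */
del_timer_sync(&amp;chan-&gt;timer);

/* 5/ flush in-flight tasklets */
tasklet_kill(&amp;chan-&gt;cleanup_task);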

References:
https://lkml.org/lkml/2014/1/27/282
https://lkml.org/lkml/2014/2/19/672

Cc: Ingo Molnar &lt;mingo@elte.hu&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Reported-by: Mike Galbraith &lt;bitbucket@online.de&gt;
Reported-by: Stanislav Fomichev &lt;stfomichev@yandex-team.ru&gt;
Tested-by: Mike Galbraith &lt;bitbucket@online.de&gt;
Tested-by: Stanislav Fomichev &lt;stfomichev@yandex-team.ru&gt;
Reviewed-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
Signed-off-by: Jiri Slaby &lt;jslaby@suse.cz&gt;

</content>
</entry>
<entry>
<title>net_dma: mark broken</title>
<updated>2014-01-09T20:25:10+00:00</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2013-12-17T18:09:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=0ce5352593096740d03285378cc3c6c01655ecf0'/>
<id>0ce5352593096740d03285378cc3c6c01655ecf0</id>
<content type='text'>
commit 77873803363c9e831fc1d1e6895c084279090c22 upstream.

net_dma can cause data to be copied to a stale mapping if a
copy-on-write fault occurs during dma.  The application sees missing
data.

The following trace is triggered by modifying the kernel to WARN if it
ever triggers copy-on-write on a page that is undergoing dma:

 WARNING: CPU: 24 PID: 2529 at lib/dma-debug.c:485 debug_dma_assert_idle+0xd2/0x120()
 ioatdma 0000:00:04.0: DMA-API: cpu touching an active dma mapped page [pfn=0x16bcd9]
 Modules linked in: iTCO_wdt iTCO_vendor_support ioatdma lpc_ich pcspkr dca
 CPU: 24 PID: 2529 Comm: linbug Tainted: G        W    3.13.0-rc1+ #353
  00000000000001e5 ffff88016f45f688 ffffffff81751041 ffff88017ab0ef70
  ffff88016f45f6d8 ffff88016f45f6c8 ffffffff8104ed9c ffffffff810f3646
  ffff8801768f4840 0000000000000282 ffff88016f6cca10 00007fa2bb699349
 Call Trace:
  [&lt;ffffffff81751041&gt;] dump_stack+0x46/0x58
  [&lt;ffffffff8104ed9c&gt;] warn_slowpath_common+0x8c/0xc0
  [&lt;ffffffff810f3646&gt;] ? ftrace_pid_func+0x26/0x30
  [&lt;ffffffff8104ee86&gt;] warn_slowpath_fmt+0x46/0x50
  [&lt;ffffffff8139c062&gt;] debug_dma_assert_idle+0xd2/0x120
  [&lt;ffffffff81154a40&gt;] do_wp_page+0xd0/0x790
  [&lt;ffffffff811582ac&gt;] handle_mm_fault+0x51c/0xde0
  [&lt;ffffffff813830b9&gt;] ? copy_user_enhanced_fast_string+0x9/0x20
  [&lt;ffffffff8175fc2c&gt;] __do_page_fault+0x19c/0x530
  [&lt;ffffffff8175c196&gt;] ? _raw_spin_lock_bh+0x16/0x40
  [&lt;ffffffff810f3539&gt;] ? trace_clock_local+0x9/0x10
  [&lt;ffffffff810fa1f4&gt;] ? rb_reserve_next_event+0x64/0x310
  [&lt;ffffffffa0014c00&gt;] ? ioat2_dma_prep_memcpy_lock+0x60/0x130 [ioatdma]
  [&lt;ffffffff8175ffce&gt;] do_page_fault+0xe/0x10
  [&lt;ffffffff8175c862&gt;] page_fault+0x22/0x30
  [&lt;ffffffff81643991&gt;] ? __kfree_skb+0x51/0xd0
  [&lt;ffffffff813830b9&gt;] ? copy_user_enhanced_fast_string+0x9/0x20
  [&lt;ffffffff81388ea2&gt;] ? memcpy_toiovec+0x52/0xa0
  [&lt;ffffffff8164770f&gt;] skb_copy_datagram_iovec+0x5f/0x2a0
  [&lt;ffffffff8169d0f4&gt;] tcp_rcv_established+0x674/0x7f0
  [&lt;ffffffff816a68c5&gt;] tcp_v4_do_rcv+0x2e5/0x4a0
  [..]
 ---[ end trace e30e3b01191b7617 ]---
 Mapped at:
  [&lt;ffffffff8139c169&gt;] debug_dma_map_page+0xb9/0x160
  [&lt;ffffffff8142bf47&gt;] dma_async_memcpy_pg_to_pg+0x127/0x210
  [&lt;ffffffff8142cce9&gt;] dma_memcpy_pg_to_iovec+0x119/0x1f0
  [&lt;ffffffff81669d3c&gt;] dma_skb_copy_datagram_iovec+0x11c/0x2b0
  [&lt;ffffffff8169d1ca&gt;] tcp_rcv_established+0x74a/0x7f0:

...the problem is that the receive path falls back to cpu-copy in
several locations and this trace is just one of the areas.  A few
options were considered to fix this:

1/ sync all dma whenever a cpu copy branch is taken

2/ modify the page fault handler to hold off while dma is in-flight

Option 1 adds yet more cpu overhead to an "offload" that struggles to compete
with cpu-copy.  Option 2 adds checks for behavior that is already documented as
broken when using get_user_pages().  At a minimum a debug mode is warranted to
catch and flag these violations of the dma-api vs get_user_pages().
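
The trace above already shows that debug mode at work:
debug_dma_assert_idle() is called from the COW path. A hedged sketch of
the check (the surrounding function is illustrative):

static void example_cow_check(struct page *page)
{
	/* WARNs if the page still belongs to an active DMA mapping,
	 * i.e. exactly the net_dma failure mode described above. */
	debug_dma_assert_idle(page);
}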

Thanks to David for his reproducer.

Cc: Dave Jiang &lt;dave.jiang@intel.com&gt;
Cc: Vinod Koul &lt;vinod.koul@intel.com&gt;
Cc: Alexander Duyck &lt;alexander.h.duyck@intel.com&gt;
Reported-by: David Whipple &lt;whipple@securedatainnovations.ch&gt;
Acked-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>ioatdma: fix selection of 16 vs 8 source path</title>
<updated>2013-12-04T19:05:37+00:00</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2013-11-13T18:37:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=a009902c0d2e86c28cb76a2d4e54d12f098a8ef8'/>
<id>a009902c0d2e86c28cb76a2d4e54d12f098a8ef8</id>
<content type='text'>
commit 21e96c7313486390c694919522a76dfea0a86c59 upstream.

When performing continuations there are implied sources that need to be
added to the source count. Quoting dma_set_maxpq:

/* dma_maxpq - reduce maxpq in the face of continued operations
 * @dma - dma device with PQ capability
 * @flags - to check if DMA_PREP_CONTINUE and DMA_PREP_PQ_DISABLE_P are set
 *
 * When an engine does not support native continuation we need 3 extra
 * source slots to reuse P and Q with the following coefficients:
 * 1/ {00} * P : remove P from Q', but use it as a source for P'
 * 2/ {01} * Q : use Q to continue Q' calculation
 * 3/ {00} * Q : subtract Q from P' to cancel (2)
 *
 * In the case where P is disabled we only need 1 extra source:
 * 1/ {01} * Q : use Q to continue Q' calculation
 */

...fix the selection of the 16 source path to take these implied sources
into account.

Note this also kills the BUG_ON(src_cnt &lt; 9) check in
__ioat3_prep_pq16_lock().  Besides not accounting for implied sources,
the check is redundant given we already made the path selection.
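
A hedged sketch of the accounting (the helper name is assumed; the flags
are the ones named in the comment above):

static unsigned int src_cnt_with_continue(unsigned int src_cnt,
					  unsigned long flags)
{
	if (flags &amp; DMA_PREP_CONTINUE)
		/* 3 implied sources normally, 1 when P is disabled. */
		src_cnt += (flags &amp; DMA_PREP_PQ_DISABLE_P) ? 1 : 3;
	return src_cnt;
}

/* ...the 16 source path is then chosen when
 * src_cnt_with_continue(src_cnt, flags) &gt; 8. */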

Cc: Dave Jiang &lt;dave.jiang@intel.com&gt;
Acked-by: Dave Jiang &lt;dave.jiang@intel.com&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>ioatdma: fix sed pool selection</title>
<updated>2013-12-04T19:05:36+00:00</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2013-11-13T18:15:42+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=1bb87f93deb4958aecbd4d60b62f8426f297646b'/>
<id>1bb87f93deb4958aecbd4d60b62f8426f297646b</id>
<content type='text'>
commit 5d48b9b5d80e3aa38a5161565398b1e48a650573 upstream.

The array used to look up the sed pool based on the number of sources
(pq16_idx_to_sedi) has 16 entries and expects a maximum source index.
However, we pass the total source count, which runs off the end of the
array when src_cnt == 16.  The minimal fix is to just pass src_cnt - 1,
but given we know the source count is &gt; 8, we can simply calculate the
sed pool as (src_cnt - 2) &gt;&gt; 3.
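
Spelled out for the bounds the text gives, as a hedged sketch (the
helper name is assumed):

static unsigned int pq16_sed_pool(unsigned int src_cnt)
{
	/* For 9 &lt;= src_cnt &lt;= 16 this yields 0 (src_cnt == 9) or
	 * 1 (src_cnt in 10..16), so unlike indexing a 16-entry table
	 * with src_cnt == 16 it can never run off the end. */
	return (src_cnt - 2) &gt;&gt; 3;
}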

Cc: Dave Jiang &lt;dave.jiang@intel.com&gt;
Acked-by: Dave Jiang &lt;dave.jiang@intel.com&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>ioatdma: Fix bug in selftest after removal of DMA_MEMSET.</title>
<updated>2013-12-04T19:05:36+00:00</updated>
<author>
<name>Dave Jiang</name>
<email>dave.jiang@intel.com</email>
</author>
<published>2013-11-06T15:50:09+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=b9c4b04c214848774fbbee4d946aa39ecb8f1010'/>
<id>b9c4b04c214848774fbbee4d946aa39ecb8f1010</id>
<content type='text'>
commit ac7d631f7d9f9e4e6116c4a72b6308067d0a2226 upstream.

Commit 48a9db4 (3.11) removed the memset op from the xor selftest for
ioatdma. The issue is that the removed op was never replaced with a CPU
memset, so the memory being operated on, which is expected to be zeroed,
was not. This caused the xor selftest to fail.
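
A hedged sketch of the repair (dest and its size are assumptions): zero
the destination with a plain CPU memset before running the validation:

/* The DMA_MEMSET op used to do this; now the CPU must. */
memset(page_address(dest), 0, PAGE_SIZE);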

Signed-off-by: Dave Jiang &lt;dave.jiang@intel.com&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>dmaengine: edma: fix another memory leak</title>
<updated>2013-10-24T16:47:50+00:00</updated>
<author>
<name>Vinod Koul</name>
<email>vinod.koul@intel.com</email>
</author>
<published>2013-10-24T16:47:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=7261828776b33ff434837674413df2920e9ca2ff'/>
<id>7261828776b33ff434837674413df2920e9ca2ff</id>
<content type='text'>
Commit 4b6271a6 fixed a memory leak, but one more existed in the driver,
so fix that one as well.
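
The message does not say where the leak sat; the usual shape of such a
fix is freeing the descriptor on an error path (a sketch, all names
assumed):

ret = example_slave_config(echan, edesc);
if (ret &lt; 0) {
	kfree(edesc);	/* previously leaked on this path */
	return NULL;
}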

Signed-off-by: Vinod Koul &lt;vinod.koul@intel.com&gt;
</content>
</entry>
</feed>
