|
Change SOC conditionals to make them more forward-looking.
Change-Id: Ib60db4e690c2f396afdec962616d735548b5a8a9
Reviewed-on: http://git-master/r/32706
Reviewed-by: Niket Sirsi <nsirsi@nvidia.com>
Tested-by: Niket Sirsi <nsirsi@nvidia.com>
|
|
Adding a debugfs interface to watch the DMA registers
from user space.
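As an editorial aside, a minimal sketch of how such a debugfs view could be wired up with the standard seq_file helpers; the file name, register offsets, channel count, and the apb_dma_base mapping are illustrative assumptions, not the driver's actual symbols:

    #include <linux/debugfs.h>
    #include <linux/seq_file.h>
    #include <linux/fs.h>
    #include <linux/io.h>

    /* Assumed layout: 16 channels, 0x20 bytes apart; offsets are placeholders. */
    #define APB_DMA_CHAN_BASE   0x1000
    #define APB_DMA_CHAN_CSR    0x00
    #define APB_DMA_CHAN_STA    0x04

    static void __iomem *apb_dma_base;      /* mapped during driver init */

    static int tegra_dma_regs_show(struct seq_file *s, void *unused)
    {
            int ch;

            for (ch = 0; ch < 16; ch++) {
                    void __iomem *regs = apb_dma_base +
                                         APB_DMA_CHAN_BASE + ch * 0x20;

                    seq_printf(s, "ch%02d: CSR=%08x STA=%08x\n", ch,
                               readl(regs + APB_DMA_CHAN_CSR),
                               readl(regs + APB_DMA_CHAN_STA));
            }
            return 0;
    }

    static int tegra_dma_regs_open(struct inode *inode, struct file *file)
    {
            return single_open(file, tegra_dma_regs_show, NULL);
    }

    static const struct file_operations tegra_dma_regs_fops = {
            .open    = tegra_dma_regs_open,
            .read    = seq_read,
            .llseek  = seq_lseek,
            .release = single_release,
    };

    /* Registered once at init:
     * debugfs_create_file("tegra_dma", S_IRUGO, NULL, NULL,
     *                     &tegra_dma_regs_fops);
     */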
Change-Id: I42204b0fdd2aa201006c4cc96d2448aa24b98fc5
Reviewed-on: http://git-master/r/29624
Reviewed-by: Laxman Dewangan <ldewangan@nvidia.com>
Tested-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
|
|
By changing the DMA allocation API to take the client name, it is easy
to track who has allocated the DMA channels when we run out of
channels.
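A sketch of how recording the client name at allocation time might look; the varargs signature, the client_name field, and the find_free_channel() helper are assumptions for illustration:

    #include <linux/kernel.h>
    #include <stdarg.h>

    struct tegra_dma_channel *tegra_dma_allocate_channel(int mode,
                                                         const char *namefmt, ...)
    {
            struct tegra_dma_channel *ch;
            va_list args;

            ch = find_free_channel(mode);   /* hypothetical helper */
            if (!ch) {
                    pr_err("%s: out of DMA channels\n", __func__);
                    return NULL;
            }

            /* Remember who owns the channel so exhaustion is diagnosable. */
            va_start(args, namefmt);
            vsnprintf(ch->client_name, sizeof(ch->client_name), namefmt, args);
            va_end(args);

            return ch;
    }

A client would then pass an identifying name, e.g. tegra_dma_allocate_channel(TEGRA_DMA_MODE_ONESHOT, "i2s.%d", id).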
Original-Change-Id: I016011cfd74089fed0da1bc0f121800017ce124a
Reviewed-on: http://git-master/r/28031
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Change-Id: I048bcb87f95ee6d8ad2fdce993a1758dc5071666
|
|
Resetting the APB DMA controller and enabling its clock during
kernel initialization.
Making the necessary entry for the APB DMA clock in the clock table.
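A sketch of the init-time sequence under stated assumptions: the "tegra-apbdma" clock name, the delay, and the use of the tegra_periph_reset_* helpers to pulse the module reset are illustrative:

    #include <linux/clk.h>
    #include <linux/delay.h>
    #include <linux/err.h>

    static int __init tegra_apb_dma_hw_init(void)
    {
            struct clk *c = clk_get_sys("tegra-apbdma", NULL); /* assumed name */

            if (IS_ERR(c))
                    return PTR_ERR(c);

            clk_enable(c);                  /* ungate the APB DMA clock */
            tegra_periph_reset_assert(c);   /* pulse the module reset */
            udelay(10);
            tegra_periph_reset_deassert(c);

            return 0;
    }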
Original-Change-Id: Ifaed5a70ed06b162a5015a2eae8bb444b43178c4
Reviewed-on: http://git-master/r/27873
Reviewed-by: Niket Sirsi <nsirsi@nvidia.com>
Tested-by: Niket Sirsi <nsirsi@nvidia.com>
Change-Id: If66c96a5e9cd015086f4d407ed9fc9bd99b6b29f
|
|
Change-Id: I2ffeaf6f8dfeb279b40ca6f69f6c9157401a746a
|
|
Setting the DMA burst size based on the transfer size for the
I2S client on the Tegra3 architecture.
Setting the burst size to 4 words on the Tegra2 architecture.
bug 796817
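A sketch of the selection logic being described; the chip check, the BURST_* encodings, and the divisibility thresholds are placeholder assumptions:

    #define BURST_1_WORD    1
    #define BURST_4_WORDS   4
    #define BURST_8_WORDS   8

    static int i2s_dma_burst_size(unsigned long len_bytes, bool is_tegra3)
    {
            if (!is_tegra3)
                    return BURST_4_WORDS;   /* fixed 4-word burst on Tegra2 */

            /* On Tegra3, pick the largest burst that evenly divides the
             * transfer so the final burst never overruns the request. */
            if (!(len_bytes % 32))
                    return BURST_8_WORDS;
            if (!(len_bytes % 16))
                    return BURST_4_WORDS;
            return BURST_1_WORD;
    }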
Original-Change-Id: I6c9e4ab775fb23d51207084b231745fc7a4f60d8
Reviewed-on: http://git-master/r/21102
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Tested-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: Laxman Dewangan <ldewangan@nvidia.com>
Change-Id: Ib5c639bb33d2a05060ff62bc75a2e43f655310f9
|
|
Original-Change-Id: Ia098e22789f4817e14ac34de01f8d990b4b4d29b
Reviewed-on: http://git-master/r/15975
Tested-by: Scott Williams <scwilliams@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Change-Id: If9677ac2190d8d2266ee40d011f5841e97838522
|
|
Conflicts:
arch/arm/configs/tegra_defconfig
arch/arm/mach-tegra/Kconfig
arch/arm/mach-tegra/Makefile
arch/arm/mach-tegra/board-ventana-power.c
arch/arm/mach-tegra/board-ventana-sensors.c
arch/arm/mach-tegra/board-ventana.c
arch/arm/mach-tegra/clock.c
arch/arm/mach-tegra/common.c
arch/arm/mach-tegra/cpu-tegra.c
arch/arm/mach-tegra/fuse.c
arch/arm/mach-tegra/headsmp.S
arch/arm/mach-tegra/tegra2_dvfs.c
arch/arm/tools/mach-types
drivers/rtc/rtc-tegra.c
drivers/usb/gadget/fsl_udc_core.c
drivers/video/tegra/host/dev.c
drivers/video/tegra/host/nvhost_channel.c
drivers/video/tegra/host/nvhost_intr.c
Original-Change-Id: I1e9b6d0e761cf1e95cf90b78b5932b53fcb9bb5e
(cherry picked from commit 2f331e046f7c4cfc6ab54fca3193035b3bf3a14f)
Reviewed-on: http://git-master/r/14572
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Tested-by: Scott Williams <scwilliams@nvidia.com>
Change-Id: I29db8796b2e27a8d218c332de36f880a7cf4bcb2
|
|
Bug 764354
Original-Change-Id: I8a390eb4dae87dceacb97461f23d13554868b046
Reviewed-on: http://git-master/r/12228
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Tested-by: Scott Williams <scwilliams@nvidia.com>
Change-Id: I8e6b8303898796419fb5a759cd16edff9aeac081
|
|
This reverts commit b7f9c567e6b95074087584672773a167224608d3.
tegra_dma_dequeue() is needed by tegra_hsuart, spi and spi slave.
|
|
Conflicts:
drivers/serial/tegra_hsuart.c
drivers/usb/host/ehci-tegra.c
Change-Id: Ief6c03a63615a41e85de59ad14dedef309d0b2fb
|
|
Calling the complete callback when a request is cancelled leads to
locking problems in the callback, which could be called from an IRQ
with no locks held, or from whatever context called
tegra_dma_dequeue_req. Instead, expect the caller to handle the
now-cancelled request as needed.
Also removes tegra_dma_dequeue, since all users can be trivially
converted to tegra_dma_dequeue_req.
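A sketch of the caller-side pattern this change expects; the status field and the cleanup helper are hypothetical:

    #include <linux/errno.h>

    static void client_abort_transfer(struct tegra_dma_channel *ch,
                                      struct tegra_dma_req *req)
    {
            tegra_dma_dequeue_req(ch, req);

            /* The driver no longer calls req->complete() for a cancelled
             * request, so clean up here, in our own context, under our
             * own locking. */
            req->status = -ECANCELED;       /* hypothetical status field */
            client_release_buffer(req);     /* hypothetical cleanup helper */
    }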
Change-Id: If699239c09c78d1cd3afa0eaad46535b1d401a24
Signed-off-by: Colin Cross <ccross@android.com>
|
|
Adding an API for getting the amount of data transferred by the DMA.
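A sketch of how such a query might be used; the function name follows the commit's description, but the exact signature and the client structure are assumptions:

    /* Returns bytes moved so far for the given request. */
    int tegra_dma_get_transfer_count(struct tegra_dma_channel *ch,
                                     struct tegra_dma_req *req);

    /* Hypothetical audio client polling progress on an RX request: */
    static void i2s_poll_progress(struct i2s_client *i2s)
    {
            int bytes = tegra_dma_get_transfer_count(i2s->rx_dma,
                                                     &i2s->rx_req);

            if (bytes >= 0)
                    pr_debug("dma moved %d bytes so far\n", bytes);
    }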
Change-Id: I348b8a2f0f855165fb1bf74f0d9013faa97056e7
Reviewed-on: http://git-master/r/20377
Tested-by: Sumit Bhattacharya <sumitb@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
|
|
Conflicts:
arch/arm/mach-tegra/fuse.c
drivers/misc/Makefile
Change-Id: I300b925d78b31efe00c342190d8dbd50e2e81230
|
|
If a request already in the queue is passed to tegra_dma_enqueue_req,
tegra_dma_req.node->{next,prev} will end up pointing to itself instead
of at tegra_dma_channel.list, which is the way the end of the list
should be set up. When the DMA request completes and is list_del'd,
the list head will still point at it, yet the node's next/prev will
contain the list poison values. When the next DMA request completes,
a kernel panic will occur when those poison values are dereferenced.
This makes the DMA driver more robust in the face of buggy clients.
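A sketch of the guard this change implies, using the tegra_dma_channel.list / tegra_dma_req.node names from the description (locking elided):

    #include <linux/errno.h>
    #include <linux/kernel.h>
    #include <linux/list.h>

    int tegra_dma_enqueue_req(struct tegra_dma_channel *ch,
                              struct tegra_dma_req *req)
    {
            struct tegra_dma_req *r;

            /* Reject a req that is already linked into this channel's
             * queue before list_add_tail() can turn it into a self-loop. */
            list_for_each_entry(r, &ch->list, node) {
                    if (r == req) {
                            pr_err("%s: req already queued\n", __func__);
                            return -EEXIST;
                    }
            }

            list_add_tail(&req->node, &ch->list);
            /* ... program the hardware as before ... */
            return 0;
    }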
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Colin Cross <ccross@android.com>
|
|
Source files should not have executable permissions.
Change-Id: I70b6be4cf88fea4be9b092ca2f5dd08e40ee7cbd
Reviewed-on: http://git-master/r/12081
Reviewed-by: Chao Jiang <chaoj@nvidia.com>
Tested-by: Chao Jiang <chaoj@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
|
|
Change-Id: I1312ec33ba8bac38dc395d7d1a2f485b13d74c14
|
|
For SPI/SLINK, depending on the transfer size,
the burst size can be set to 1, 4, or 8 words.
bug 747979
Change-Id: Ieae0285d374e7d0eb6c2c2e633f8cafbb2b51b3a
Reviewed-on: http://git-master/r/12076
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
|
|
"Interrupt during enqueue" happens periodically when the
DMA is almost starving. This happens under certain not-
uncommon scenarios.
Signed-off-by: Iliyan Malchev <malchev@google.com>
|
|
For SPI/SLINK, set the DMA burst size based on
the transfer size.
bug 747979
Change-Id: I8c3c0a0410648a25190847590b9ac0304fb1105f
Reviewed-on: http://git-master/r/11752
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
|
|
- Added "single buffer continuous DMA" mode in addition to the
"double buffer continuous DMA" mode that is already implemented
- Changed the queuing of the next buffer to be more flexible for
continuous DMA. The next buffer can now get in-flight right after a
transfer starts, or whenever the client enqueues a buffer.
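A sketch of how the two continuous flavors might be distinguished at allocation time; these exact enum names are assumptions:

    enum tegra_dma_mode {
            TEGRA_DMA_MODE_ONESHOT,            /* known-size transfer */
            TEGRA_DMA_MODE_CONTINUOUS_DOUBLE,  /* ping-pong halves,
                                                  half-buffer callbacks */
            TEGRA_DMA_MODE_CONTINUOUS_SINGLE,  /* one buffer, re-armed on
                                                  each completion */
    };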
Signed-off-by: Iliyan Malchev <malchev@google.com>
|
|
Signed-off-by: Iliyan Malchev <malchev@google.com>
|
|
NV_DMA_MAX_TRASFER_SIZE --> TEGRA_DMA_MAX_TRANSFER_SIZE
Signed-off-by: Iliyan Malchev <malchev@google.com>
|
|
Signed-off-by: Iliyan Malchev <malchev@google.com>
|
|
Stopping DMA after the last req transfer.
Add an API to return the completed transfer count of a pending, active
or finished DMA request.
originally fixed by Gary King <gking@nvidia.com>
It is observed that the DMA interrupt has lower priority than
its client's interrupt priority. When the client's ISR calls the DMA
get-transfer API, the DMA status has not been updated yet because the
DMA ISR has not been served. So before reading the status, explicitly
check the interrupt status and handle it accordingly.
Another observed issue is that if the DMA has transferred
(requested - 4) bytes of data and moves to an invalid requestor before
stopping, the status gets reset and the transferred byte count becomes
0. This seems to be APB DMA hardware behavior. The following is the
suggested way to overcome this issue:
- Disable global enable bit.
- Read status.
- Stop dma.
- Enable global status bit.
Added this workaround and it worked fine.
originally fixed by Laxman Dewangan <ldewangan@nvidia.com>
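A sketch of the four-step sequence above; the register offsets, the GEN_ENABLE bit position, the ch->addr channel mapping, and the tegra_dma_stop() helper are placeholders:

    #include <linux/io.h>

    #define APB_DMA_GEN         0x000       /* global control (assumed) */
    #define GEN_ENABLE          (1 << 31)   /* assumed bit position */
    #define APB_DMA_CHAN_STA    0x004       /* channel status (assumed) */

    static void __iomem *dma_base;          /* global regs, mapped at init */

    static u32 tegra_dma_read_count_safely(struct tegra_dma_channel *ch)
    {
            u32 gen = readl(dma_base + APB_DMA_GEN);
            u32 status;

            /* 1. disable the global enable bit */
            writel(gen & ~GEN_ENABLE, dma_base + APB_DMA_GEN);
            /* 2. read status */
            status = readl(ch->addr + APB_DMA_CHAN_STA);
            /* 3. stop the channel */
            tegra_dma_stop(ch);
            /* 4. re-enable the global bit */
            writel(gen | GEN_ENABLE, dma_base + APB_DMA_GEN);

            return status;
    }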
In continuous mode, the DMA should stop after the last transfer has
completed if there is no more req pending.
If there is a pending req, it should check whether the hw has been
updated for the next transfer or not; if it has not started, stop the
DMA and start the new req immediately.
originally fixed by Laxman Dewangan <ldewangan@nvidia.com>
Change-Id: I49c97c96eacdf4060de6b21cec0e71d940d33f00
|
|
Print an error message when a DMA channel cannot be allocated.
Change-Id: I93a96851ac12c5ea66b2fb053033aa4260c2178a
Signed-off-by: Mike Corrigan <michael.corrigan@motorola.com>
|
|
Signed-off-by: Iliyan Malchev <malchev@google.com>
|
|
Sometimes, due to high interrupt latency in the continuous mode
of DMA transfer, the half buffer complete interrupt is handled
after DMA has transferred the full buffer. When this is detected,
stop DMA immediately and restart with the next buffer if the next
buffer is ready.
originally fixed by Victor(Weiguo) Pan <wpan@nvidia.com>
In place of using the simple spin_lock()/spin_unlock() in the
interrupt thread, use spin_lock_irqsave() and
spin_unlock_irqrestore(). The lock is shared between normal
process context and interrupt context.
originally fixed by Laxman Dewangan (ldewangan@nvidia.com)
The use of shadow registers caused memory corruption at physical
address 0 because the enable bit was not shadowed, and assuming it
needed to be set would enable an unconfigured DMA block. Most of the
register accesses don't need to know the previous state of the
registers, and the few places that do need to modify only a few bits
in the registers are the same ones that were sometimes incorrectly
setting the enable bit. This patch converts tegra_dma_update_hardware
to set the entire register, converts the other users to
read-modify-write, and drops the shadow registers completely.
Also fixes missing locking in tegra_dma_allocate_channel.
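A sketch of the read-modify-write style the patch moves to; the CSR offset and the IE_EOC bit are placeholders:

    #include <linux/io.h>

    #define APB_DMA_CHAN_CSR    0x000       /* assumed offset */
    #define CSR_IE_EOC          (1 << 30)   /* assumed interrupt-enable bit */

    static void tegra_dma_enable_eoc_irq(struct tegra_dma_channel *ch)
    {
            u32 csr;

            /* Read the live register rather than a cached shadow, change
             * only the bit we own, and write it back; the enable bit is
             * never set as a side effect. */
            csr = readl(ch->addr + APB_DMA_CHAN_CSR);
            csr |= CSR_IE_EOC;
            writel(csr, ch->addr + APB_DMA_CHAN_CSR);
    }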
Signed-off-by: Colin Cross <ccross@android.com>
|
|
Change-Id: If14c826e8919f5de11331a5c45994fe7e451330a
Signed-off-by: Colin Cross <ccross@android.com>
|
|
The APB DMA block handles DMA transfers to and from some peripherals
in the Tegra SOC. It reads from sequential addresses on the memory
bus, and writes repeatedly to the same address on the APB bus.
Two transfer modes are supported: oneshot, for transferring a known
size to or from a peripheral, and continuous, for streaming data.
In continuous mode, a callback occurs when the buffer is half full
to allow the existing data to be handled and a new request queued.
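A sketch of a continuous-mode client following this description; the tegra_dma_req field names (complete, virt_addr, bytes_transferred) and the consumer helper are best-effort assumptions:

    #include <linux/errno.h>

    static struct tegra_dma_channel *rx_ch;
    static struct tegra_dma_req rx_req;

    /* Invoked at the half-buffer mark: drain the finished half, then
     * re-queue so the stream never stalls. */
    static void rx_complete(struct tegra_dma_req *req)
    {
            consume_data(req->virt_addr,
                         req->bytes_transferred);  /* hypothetical consumer */
            tegra_dma_enqueue_req(rx_ch, req);
    }

    static int start_stream(void)
    {
            rx_ch = tegra_dma_allocate_channel(TEGRA_DMA_MODE_CONTINUOUS);
            if (!rx_ch)
                    return -EBUSY;

            rx_req.complete = rx_complete;
            return tegra_dma_enqueue_req(rx_ch, &rx_req);
    }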
v2 changes:
dma API no longer uses PTR_ERR
Signed-off-by: Erik Gilling <konkers@android.com>
Signed-off-by: Colin Cross <ccross@android.com>
|