| author | Stephen Warren <swarren@nvidia.com> | 2011-01-05 14:24:12 -0700 |
|---|---|---|
| committer | Colin Cross <ccross@android.com> | 2011-01-09 19:18:04 -0800 |
| commit | aa49ac169f2f7f6a827f42118ec17355682b7a7f | |
| tree | b7199361c7b701e3d84336c87bc2c3e3ac22db13 /arch | |
| parent | 178b6def88f0bc15a045ef7455cc7650d4deb859 | |
ARM: tegra: Prevent requeuing in-progress DMA requests
If a request already in the queue is passed to tegra_dma_enqueue_req,
tegra_dma_req.node->{next,prev} will end up pointing at itself instead
of at tegra_dma_channel.list, which is how the end of the list should
be set up. When the DMA request completes and is list_del'd, the list
head will still point at it, yet the node's next/prev will contain the
list poison values. When the next DMA request completes, a kernel
panic will occur when those poison values are dereferenced.
This makes the DMA driver more robust in the face of buggy clients.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Colin Cross <ccross@android.com>
Diffstat (limited to 'arch')
-rw-r--r-- | arch/arm/mach-tegra/dma.c | 8 |
1 file changed, 8 insertions, 0 deletions
diff --git a/arch/arm/mach-tegra/dma.c b/arch/arm/mach-tegra/dma.c
index 0ac303ebf84c..db94fcf58399 100644
--- a/arch/arm/mach-tegra/dma.c
+++ b/arch/arm/mach-tegra/dma.c
@@ -327,6 +327,7 @@ int tegra_dma_enqueue_req(struct tegra_dma_channel *ch,
 	struct tegra_dma_req *req)
 {
 	unsigned long irq_flags;
+	struct tegra_dma_req *_req;
 	int start_dma = 0;
 
 	if (req->size > TEGRA_DMA_MAX_TRANSFER_SIZE ||
@@ -337,6 +338,13 @@ int tegra_dma_enqueue_req(struct tegra_dma_channel *ch,
 
 	spin_lock_irqsave(&ch->lock, irq_flags);
 
+	list_for_each_entry(_req, &ch->list, node) {
+		if (req == _req) {
+			spin_unlock_irqrestore(&ch->lock, irq_flags);
+			return -EEXIST;
+		}
+	}
+
 	req->bytes_transferred = 0;
 	req->status = 0;
 	/* STATUS_EMPTY just means the DMA hasn't processed the buf yet. */