author		Gary King <gking@nvidia.com>	2010-06-25 18:39:58 -0700
committer	Dan Willemsen <dwillemsen@nvidia.com>	2011-11-30 21:34:51 -0800
commit		e8912a3d50cc0da8f99b3fb868a063ea1fed3193
tree		d557dcc24befc06928ffb16aa8b829e0473fb738 /arch/arm/mach-tegra/iovmm-gart.c
parent		e2daf24d99c0975f332f364721aa2d00181b6c37
[ARM] tegra: add I/O virtual memory manager interface (iovmm)
The Tegra IOVMM is an interface to allow device drivers and subsystems in
the kernel to manage the virtual memory spaces visible to I/O devices.
The interface has been designed to scale from I/O virtual memory hardware
which exists as one or more limited apertures of the address space (e.g., a
small aperture in physical address space which can perform MMU-like
remapping) up to complete virtual addressing with multiple address spaces
and memory protection.
The interface has been designed to be similar to the Linux virtual memory
system; however, operations which would be difficult to implement or
nonsensical for DMA devices (e.g., copy-on-write) are not present, and
APIs have been added to allow for management of multiple simultaneous
active address spaces.
The API is broken into four principal objects: areas, clients, domains and
devices.
Areas
=====
An area is a contiguous region of the virtual address space which can be
filled with virtual-to-physical translations (and, optionally, protection
attributes). The virtual address of the area can be queried and used for
DMA operations by the client which created it.
As with the Linux vm_area structures, it is the responsibility of whichever
code creates an area to ensure that it is populated with appropriate
translations.
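For illustration, a caller might create an area and populate it with exact
translations along these lines (a hypothetical sketch: only the function and
field names appear in this patch; the exact tegra_iovmm_create_vm argument
list is assumed, and the snippet presumes the kernel headers pulled in by
<mach/iovmm.h>):

	/* Sketch only: create a 1 MiB area with no demand-load ops (NULL),
	 * then fill it with translations for already-pinned pages. */
	static int example_map_buffer(struct tegra_iovmm_client *client,
		struct page **pages)
	{
		struct tegra_iovmm_area *area;
		unsigned int i;

		area = tegra_iovmm_create_vm(client, NULL, SZ_1M, pgprot_kernel);
		if (!area)
			return -ENOMEM;

		for (i = 0; i < (SZ_1M >> PAGE_SHIFT); i++)
			tegra_iovmm_vm_insert_pfn(area,
				area->iovm_start + (i << PAGE_SHIFT),
				page_to_pfn(pages[i]));
		return 0;
	}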
Domains
=======
A domain in the IOVMM system is similar to a process in a standard CPU
virtual memory system; it represents the entire range of virtual addresses
which may be allocated and used for translation. Depending on hardware
capabilities, one or more domains may be resident and available for
translation. IOVMM areas are allocated from IOVMM domains.
Whenever a DMA operation is performed to or from an IOVMM area, its parent
domain must be made resident prior to commencing the operation.
Clients
=======
An I/O VMM client represents any entity which needs to allocate and map
system memory into I/O virtual space. Clients are created by name
and may be created as part of a "share group," where all clients created
in the same share group will observe the same I/O virtual space (i.e., all
will use the same IOVMM domain). This is similar to threads inside a process
in the CPU virtual memory manager.
The callers of the I/O VMM system are responsible for deciding on the
granularity of client creation and share group definition; depending on the
specific usage model expected by the caller, it may be appropriate to create
an IOVMM client per task (if the caller represents an ioctl'able interface
to user land), an IOVMM client per driver instance, a common IOVMM client
for an entire bus, or a global IOVMM client for an OS subsystem (e.g., the DMA
mapping interface).
Each caller is responsible for ensuring that its IOVMM client's translations
are resident on the system prior to performing DMA operations using IOVMM
addresses. This is accomplished by preceding all DMA operations for the client
with a call to tegra_iovmm_client_lock (or tegra_iovmm_client_trylock),
and following all operations (once complete) with a call to
tegra_iovmm_client_unlock. In this regard, clients are cooperatively
context-switched, and are expected to behave appropriately.
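For example, a driver submitting DMA through an IOVMM area would bracket the
operation roughly as follows (a sketch of the locking contract described
above; my_start_dma and my_wait_dma are hypothetical stand-ins for a driver's
own submission path):

	static int example_dma(struct tegra_iovmm_client *client,
		struct tegra_iovmm_area *area, size_t len)
	{
		int e;

		e = tegra_iovmm_client_lock(client); /* may block; -EINTR possible */
		if (e)
			return e;

		my_start_dma(area->iovm_start, len); /* hypothetical */
		my_wait_dma();                       /* hypothetical */

		tegra_iovmm_client_unlock(client); /* allow a context switch */
		return 0;
	}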
Devices
=======
I/O VMM devices are the physical hardware responsible for performing the
I/O virtual-to-physical translation.
Devices are responsible for domain management: the mapping and unmapping
operations needed to make translations resident in the domain (including
any TLB shootdown or cache invalidation needed to ensure coherency), locking
and unlocking domains as they are made resident by clients into the devices'
address space(s), and allocating and deallocating the domain objects.
Devices are responsible for the allocation and deallocation of domains to
allow coalescing of multiple client share groups into a single domain. For
example, if the device's hardware only allows a single address space to
be translated system-wide, performing full flushes and invalidates of the
translation at every client switch may be prohibitively expensive. In these
circumstances, a legal implementation of the IOVMM interface may simply
return the same domain for all clients on the system (regardless of the
originally-specified share group).
In this respect, a client can be assured that it will share an address space
with all of the other clients in its share group; however, it may share this
address space with clients outside its share group as well.
Multiple devices may be present in a system; a device should return a NULL
domain if it is incapable of servicing the client when it is asked to
allocate a domain.
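The GART driver added by this patch is a concrete example of such coalescing:
the hardware exposes a single translation aperture, so its alloc_domain hook
ignores the requested share group and hands every client the same domain:

	static struct tegra_iovmm_domain *gart_alloc_domain(
		struct tegra_iovmm_device *dev, struct tegra_iovmm_client *client)
	{
		struct gart_device *gart =
			container_of(dev, struct gart_device, iovmm);
		return &gart->domain;
	}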
----------------------------------------------------------------------------
IOVMM Client API
================
tegra_iovmm_alloc_client - Called to create a new IOVMM client object; the
implementation may create a new domain or return an existing one depending on
both the device and the share group.
tegra_iovmm_free_client - Frees a client.
tegra_iovmm_client_lock - Makes a client's translations resident in the IOVMM
device for subsequent DMA operations. May block if the device is incapable
of context-switching the client when it is called. Returns -EINTR if the
waiting thread is interrupted before the client is locked.
tegra_iovmm_client_trylock - Non-blocking version of tegra_iovmm_client_lock.
tegra_iovmm_client_unlock - Called by clients after DMA operations on IOVMM-
translated addresses are complete; allows the IOVMM system to context-switch
the current client out of the device if needed.
tegra_iovmm_create_vm - Called to allocate an IOVMM area. If
lazy / demand-loading of pages is desired, clients should supply a pointer
to a tegra_iovmm_area_ops structure providing callback functions to load, pin
and unpin the physical pages which will be mapped into this IOVMM region
(see the sketch following this list).
tegra_iovmm_get_vm_size - Called to query the total size of an IOVMM
client's I/O virtual address space.
tegra_iovmm_free_vm - Called to free an IOVMM area, releasing any pinned
physical pages mapped by it and decommitting any resources (memory for
PTEs / PDEs) required by the VM area.
tegra_iovmm_vm_insert_pfn - Called to insert an exact pfn (system memory
physical page) into the area at a specific virtual address. Illegal to call
if the IOVMM area was originally created with lazy / demand-loading.
tegra_iovmm_zap_vm - Called to mark all mappings in the IOVMM area as
invalid / no-access; the area continues to consume I/O virtual address space.
For lazy / demand-loaded IOVMM areas, a zapped region will not be reloaded
until it has been unzapped; DMA operations using the affected translations
may fault (if supported by the device).
tegra_iovmm_unzap_vm - Called to re-enable lazy / demand-loading of pages
for a previously-zapped IOVMM area.
tegra_iovmm_find_area_get - Called to find the IOVMM area object
corresponding to the specified I/O virtual address; returns NULL if the
address is not allocated in the client's address space. Increases the
reference count on the IOVMM area object.
tegra_iovmm_area_get - Called to increase the reference count on the IOVMM
area object.
tegra_iovmm_area_put - Called to decrease the reference count on the IOVMM
area object.
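For lazy / demand-loaded areas, the tegra_iovmm_area_ops structure supplies
the pin and unpin callbacks that devices invoke. The lock_makeresident and
release names below match how iovmm-gart.c calls iovma->ops; the exact
signatures and struct layout are assumed, and my_pin_page / my_unpin_page are
hypothetical helpers:

	static unsigned long my_lock_makeresident(struct tegra_iovmm_area *iovma,
		tegra_iovmm_addr_t offs)
	{
		struct page *p = my_pin_page(iovma, offs); /* hypothetical */
		/* an invalid pfn tells the device that mapping failed */
		return p ? page_to_pfn(p) : ~0ul;
	}

	static void my_release(struct tegra_iovmm_area *iovma,
		tegra_iovmm_addr_t offs)
	{
		my_unpin_page(iovma, offs); /* hypothetical */
	}

	static struct tegra_iovmm_area_ops my_area_ops = {
		.lock_makeresident = my_lock_makeresident,
		.release           = my_release,
	};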
IOVMM Device API
================
tegra_iovmm_register - Called to register a new IOVMM device with the IOVMM
manager.
tegra_iovmm_unregister - Called to remove an IOVMM device from the IOVMM
manager (unspecified behavior if called while a translation is active and / or
in-use).
tegra_iovmm_domain_init - Called to initialize all of the IOVMM manager's
data structures (block trees, etc.) after allocating a new domain.
IOVMM Device HAL
================
map - Called to inform the device about a new lazy-mapped IOVMM area. Devices
may load the entire VM area when this is called, or at any time prior to
the completion of the first read or write operation using the translation.
unmap - Called to zap or to decommit translations.
map_pfn - Called to insert a specific virtual-to-physical translation in the
IOVMM area.
lock_domain - Called to make a domain resident; should return 0 if the
domain was successfully context-switched, non-zero if the operation can
not be completed (e.g., all available simultaneous hardware translations are
locked). If the device can guarantee that every domain it allocates is
always usable, this function may be NULL.
unlock_domain - Releases a domain from residency, allowing the hardware
translation to be used by other domains.
alloc_domain - Called to allocate a new domain; allowed to return an
existing domain.
free_domain - Called to free a domain.
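A device collects these hooks in a tegra_iovmm_device_ops table. The GART
driver below fills it in as follows; lock_domain, unlock_domain and
free_domain are simply omitted (left NULL), since the GART's one domain is
always resident and is never freed:

	static struct tegra_iovmm_device_ops tegra_iovmm_gart_ops = {
		.map          = gart_map,
		.unmap        = gart_unmap,
		.map_pfn      = gart_map_pfn,
		.alloc_domain = gart_alloc_domain,
		.suspend      = gart_suspend,
		.resume       = gart_resume,
	};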
Change-Id: Ic65788777b7aba50ee323fe16fd553ce66c4b87c
Signed-off-by: Gary King <gking@nvidia.com>
Diffstat (limited to 'arch/arm/mach-tegra/iovmm-gart.c')
-rw-r--r--	arch/arm/mach-tegra/iovmm-gart.c	351
1 files changed, 351 insertions, 0 deletions
diff --git a/arch/arm/mach-tegra/iovmm-gart.c b/arch/arm/mach-tegra/iovmm-gart.c
new file mode 100644
index 000000000000..ef052e29b4f8
--- /dev/null
+++ b/arch/arm/mach-tegra/iovmm-gart.c
@@ -0,0 +1,351 @@
+/*
+ * arch/arm/mach-tegra/iovmm-gart.c
+ *
+ * Tegra I/O VMM implementation for GART devices in Tegra and Tegra 2 series
+ * systems-on-a-chip.
+ *
+ * Copyright (c) 2010, NVIDIA Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
+ */
+
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/mm.h>
+#include <asm/io.h>
+#include <asm/cacheflush.h>
+
+#include <mach/iovmm.h>
+
+#if defined(CONFIG_ARCH_TEGRA_2x_SOC)
+#define GART_CONFIG		0x24
+#define GART_ENTRY_ADDR		0x28
+#define GART_ENTRY_DATA		0x2c
+#endif
+
+#define VMM_NAME "iovmm-gart"
+#define DRIVER_NAME "tegra_gart"
+
+#define GART_PAGE_SHIFT	(12)
+#define GART_PAGE_MASK	(~((1<<GART_PAGE_SHIFT)-1))
+
+struct gart_device {
+	void __iomem		*regs;
+	u32			*savedata;
+	u32			page_count;	/* total remappable size */
+	tegra_iovmm_addr_t	iovmm_base;	/* offset to apply to vmm_area */
+	spinlock_t		pte_lock;
+	struct tegra_iovmm_device iovmm;
+	struct tegra_iovmm_domain domain;
+	bool			enable;
+	bool			needs_barrier;	/* emulator WAR */
+};
+
+static int gart_map(struct tegra_iovmm_device *, struct tegra_iovmm_area *);
+static void gart_unmap(struct tegra_iovmm_device *,
+	struct tegra_iovmm_area *, bool);
+static void gart_map_pfn(struct tegra_iovmm_device *,
+	struct tegra_iovmm_area *, tegra_iovmm_addr_t, unsigned long);
+static struct tegra_iovmm_domain *gart_alloc_domain(
+	struct tegra_iovmm_device *, struct tegra_iovmm_client *);
+
+static int gart_probe(struct platform_device *);
+static int gart_remove(struct platform_device *);
+static int gart_suspend(struct tegra_iovmm_device *dev);
+static void gart_resume(struct tegra_iovmm_device *dev);
+
+
+static struct tegra_iovmm_device_ops tegra_iovmm_gart_ops = {
+	.map		= gart_map,
+	.unmap		= gart_unmap,
+	.map_pfn	= gart_map_pfn,
+	.alloc_domain	= gart_alloc_domain,
+	.suspend	= gart_suspend,
+	.resume		= gart_resume,
+};
+
+static struct platform_driver tegra_iovmm_gart_drv = {
+	.probe		= gart_probe,
+	.remove		= gart_remove,
+	.driver		= {
+		.name	= DRIVER_NAME,
+	},
+};
+
+static int gart_suspend(struct tegra_iovmm_device *dev)
+{
+	struct gart_device *gart = container_of(dev, struct gart_device, iovmm);
+	unsigned int i;
+	unsigned long reg;
+
+	if (!gart)
+		return -ENODEV;
+
+	if (!gart->enable)
+		return 0;
+
+	spin_lock(&gart->pte_lock);
+	reg = gart->iovmm_base;
+	for (i=0; i<gart->page_count; i++) {
+		writel(reg, gart->regs + GART_ENTRY_ADDR);
+		gart->savedata[i] = readl(gart->regs + GART_ENTRY_DATA);
+		dmb();
+		reg += 1 << GART_PAGE_SHIFT;
+	}
+	spin_unlock(&gart->pte_lock);
+	return 0;
+}
+
+static void do_gart_setup(struct gart_device *gart, const u32 *data)
+{
+	unsigned long reg;
+	unsigned int i;
+
+	writel(1, gart->regs + GART_CONFIG);
+
+	reg = gart->iovmm_base;
+	for (i=0; i<gart->page_count; i++) {
+		writel(reg, gart->regs + GART_ENTRY_ADDR);
+		writel((data) ? data[i] : 0, gart->regs + GART_ENTRY_DATA);
+		wmb();
+		reg += 1 << GART_PAGE_SHIFT;
+	}
+	wmb();
+}
+
+static void gart_resume(struct tegra_iovmm_device *dev)
+{
+	struct gart_device *gart = container_of(dev, struct gart_device, iovmm);
+
+	if (!gart || !gart->enable || (gart->enable && !gart->savedata))
+		return;
+
+	spin_lock(&gart->pte_lock);
+	do_gart_setup(gart, gart->savedata);
+	spin_unlock(&gart->pte_lock);
+}
+
+static int gart_remove(struct platform_device *pdev)
+{
+	struct gart_device *gart = platform_get_drvdata(pdev);
+
+	if (!gart)
+		return 0;
+
+	if (gart->enable)
+		writel(0, gart->regs + GART_CONFIG);
+
+	gart->enable = 0;
+	platform_set_drvdata(pdev, NULL);
+	tegra_iovmm_unregister(&gart->iovmm);
+	if (gart->savedata)
+		vfree(gart->savedata);
+	if (gart->regs)
+		iounmap(gart->regs);
+	kfree(gart);
+	return 0;
+}
+
+static int gart_probe(struct platform_device *pdev)
+{
+	struct gart_device *gart = NULL;
+	struct resource *res, *res_remap;
+	void __iomem *gart_regs = NULL;
+	int e;
+
+	if (!pdev) {
+		pr_err(DRIVER_NAME ": platform_device required\n");
+		return -ENODEV;
+	}
+
+	if (PAGE_SHIFT != GART_PAGE_SHIFT) {
+		pr_err(DRIVER_NAME ": GART and CPU page size must match\n");
+		return -ENXIO;
+	}
+
+	/* the GART memory aperture is required */
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	res_remap = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+
+	if (!res || !res_remap) {
+		pr_err(DRIVER_NAME ": GART memory aperture expected\n");
+		return -ENXIO;
+	}
+	gart = kzalloc(sizeof(*gart), GFP_KERNEL);
+	if (!gart) {
+		pr_err(DRIVER_NAME ": failed to allocate tegra_iovmm_device\n");
+		e = -ENOMEM;
+		goto fail;
+	}
+
+	gart_regs = ioremap_wc(res->start, res->end - res->start + 1);
+	if (!gart_regs) {
+		pr_err(DRIVER_NAME ": failed to remap GART registers\n");
+		e = -ENXIO;
+		goto fail;
+	}
+
+	gart->iovmm.name = VMM_NAME;
+	gart->iovmm.ops = &tegra_iovmm_gart_ops;
+	gart->iovmm.pgsize_bits = GART_PAGE_SHIFT;
+	spin_lock_init(&gart->pte_lock);
+
+	platform_set_drvdata(pdev, gart);
+
+	e = tegra_iovmm_register(&gart->iovmm);
+	if (e) goto fail;
+
+	e = tegra_iovmm_domain_init(&gart->domain, &gart->iovmm,
+		(tegra_iovmm_addr_t)res_remap->start,
+		(tegra_iovmm_addr_t)res_remap->end+1);
+	if (e) goto fail;
+
+	gart->regs = gart_regs;
+	gart->iovmm_base = (tegra_iovmm_addr_t)res_remap->start;
+	gart->page_count = res_remap->end - res_remap->start + 1;
+	gart->page_count >>= GART_PAGE_SHIFT;
+
+	gart->savedata = vmalloc(sizeof(u32)*gart->page_count);
+	if (!gart->savedata) {
+		pr_err(DRIVER_NAME ": failed to allocate context save area\n");
+		e = -ENOMEM;
+		goto fail;
+	}
+
+	spin_lock(&gart->pte_lock);
+
+	do_gart_setup(gart, NULL);
+	gart->enable = 1;
+
+	spin_unlock(&gart->pte_lock);
+	return 0;
+
+fail:
+	if (gart_regs)
+		iounmap(gart_regs);
+	if (gart && gart->savedata)
+		vfree(gart->savedata);
+	if (gart)
+		kfree(gart);
+	return e;
+}
+
+static int __devinit gart_init(void)
+{
+	return platform_driver_register(&tegra_iovmm_gart_drv);
+}
+
+static void __exit gart_exit(void)
+{
+	return platform_driver_unregister(&tegra_iovmm_gart_drv);
+}
+
+#define GART_PTE(_pfn)	(0x80000000ul | ((_pfn)<<PAGE_SHIFT))
+
+
+static int gart_map(struct tegra_iovmm_device *dev,
+	struct tegra_iovmm_area *iovma)
+{
+	struct gart_device *gart = container_of(dev, struct gart_device, iovmm);
+	unsigned long gart_page, count;
+	unsigned int i;
+
+	gart_page = iovma->iovm_start;
+	count = iovma->iovm_length >> GART_PAGE_SHIFT;
+
+	for (i=0; i<count; i++) {
+		unsigned long pfn;
+
+		pfn = iovma->ops->lock_makeresident(iovma, i<<PAGE_SHIFT);
+		if (!pfn_valid(pfn))
+			goto fail;
+
+		spin_lock(&gart->pte_lock);
+
+		writel(gart_page, gart->regs + GART_ENTRY_ADDR);
+		writel(GART_PTE(pfn), gart->regs + GART_ENTRY_DATA);
+		wmb();
+		gart_page += 1 << GART_PAGE_SHIFT;
+
+		spin_unlock(&gart->pte_lock);
+	}
+	wmb();
+	return 0;
+
+fail:
+	spin_lock(&gart->pte_lock);
+	while (i--) {
+		iovma->ops->release(iovma, i<<PAGE_SHIFT);
+		gart_page -= 1 << GART_PAGE_SHIFT;
+		writel(gart_page, gart->regs + GART_ENTRY_ADDR);
+		writel(0, gart->regs + GART_ENTRY_DATA);
+		wmb();
+	}
+	spin_unlock(&gart->pte_lock);
+	wmb();
+	return -ENOMEM;
+}
+
+static void gart_unmap(struct tegra_iovmm_device *dev,
+	struct tegra_iovmm_area *iovma, bool decommit)
+{
+	struct gart_device *gart = container_of(dev, struct gart_device, iovmm);
+	unsigned long gart_page, count;
+	unsigned int i;
+
+	count = iovma->iovm_length >> GART_PAGE_SHIFT;
+	gart_page = iovma->iovm_start;
+
+	spin_lock(&gart->pte_lock);
+	for (i=0; i<count; i++) {
+		if (iovma->ops && iovma->ops->release)
+			iovma->ops->release(iovma, i<<PAGE_SHIFT);
+
+		writel(gart_page, gart->regs + GART_ENTRY_ADDR);
+		writel(0, gart->regs + GART_ENTRY_DATA);
+		wmb();
+		gart_page += 1 << GART_PAGE_SHIFT;
+	}
+	spin_unlock(&gart->pte_lock);
+	wmb();
+}
+
+static void gart_map_pfn(struct tegra_iovmm_device *dev,
+	struct tegra_iovmm_area *iovma, tegra_iovmm_addr_t offs,
+	unsigned long pfn)
+{
+	struct gart_device *gart = container_of(dev, struct gart_device, iovmm);
+
+	BUG_ON(!pfn_valid(pfn));
+	spin_lock(&gart->pte_lock);
+	writel(offs, gart->regs + GART_ENTRY_ADDR);
+	writel(GART_PTE(pfn), gart->regs + GART_ENTRY_DATA);
+	wmb();
+	spin_unlock(&gart->pte_lock);
+	wmb();
+}
+
+static struct tegra_iovmm_domain *gart_alloc_domain(
+	struct tegra_iovmm_device *dev, struct tegra_iovmm_client *client)
+{
+	struct gart_device *gart = container_of(dev, struct gart_device, iovmm);
+	return &gart->domain;
+}
+
+module_init(gart_init);
+module_exit(gart_exit);