|
iovmm_split_free_block leaves the domain's spinlock unlocked if a
memory allocation fails. Unfortunately, all the callers of that
function assume that it returns with the spinlock taken, which then
leads to the spinlock being unlocked twice.
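A minimal sketch of the locking invariant involved (hypothetical
structure and field names, not the driver's actual code): the
allocation failure path must re-take the lock so the function returns
in the state its callers expect.

    /* Hypothetical sketch: entered with domain->block_lock held; must
     * also return with it held, even when the allocation fails. */
    static int iovmm_split_free_block(struct demo_domain *domain)
    {
            struct demo_block *b;

            spin_unlock(&domain->block_lock);   /* allocation may sleep */
            b = kzalloc(sizeof(*b), GFP_KERNEL);
            spin_lock(&domain->block_lock);     /* re-take on ALL paths */
            if (!b)
                    return -ENOMEM;  /* lock held, as callers assume */
            /* ... split the free block and insert the new one ... */
            return 0;
    }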
Bug 1035105
Change-Id: Ib4379cad76f053586d6a77b8d0dc9f41af01931a
Signed-off-by: Tuomas Tynkkynen <ttynkkynen@nvidia.com>
Reviewed-on: http://git-master/r/124299
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
|
|
Allow tegra_iovmm_alloc_client() to take a struct device * instead of
a const char *name, with __tegra_iovmm_alloc_client(). This is
necessary to support IOVMM and IOMMU simultaneously.
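A plausible shape for this change (both parameter lists are invented
for illustration; only the two function names come from the message
above): the device-based entry point derives a name and delegates to
the internal name-based helper.

    #include <linux/device.h>   /* struct device, dev_name() */

    /* Hypothetical prototypes for the sketch: */
    struct tegra_iovmm_client *__tegra_iovmm_alloc_client(const char *name,
                                                          const char *share_group);

    struct tegra_iovmm_client *tegra_iovmm_alloc_client(struct device *dev,
                                                        const char *share_group)
    {
            /* dev_name() returns the device's canonical name */
            return __tegra_iovmm_alloc_client(dev_name(dev), share_group);
    }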
Change-Id: I18df5001bfe0ece8f9f15b636eb11def9f228dfb
Signed-off-by: Hiroshi DOYU <hdoyu@nvidia.com>
Reviewed-on: http://git-master/r/114215
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
|
|
Use all capital letters for macros
Signed-off-by: Hiroshi DOYU <hdoyu@nvidia.com>
Reviewed-on: http://git-master/r/66371
(cherry picked from commit 48100a354bf4ec447c1b0eefa968322894907bca)
Change-Id: I3429b68bcc9228a4b74b50c95156f8df1c1c19a0
Signed-off-by: Pritesh Raithatha <praithatha@nvidia.com>
Reviewed-on: http://git-master/r/79990
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
|
|
Replaced iovmm_align_{up,down} with round_{up,down}
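For reference, the generic kernel helpers adopted here, round_up() and
round_down(), require a power-of-two alignment. A usage sketch (the
wrapper function is invented for illustration):

    #include <linux/kernel.h>   /* round_up(), round_down() */
    #include <asm/page.h>       /* PAGE_SIZE */

    /* Page-align an arbitrary [addr, addr + size) range. */
    static void page_align_range(unsigned long addr, unsigned long size,
                                 unsigned long *start, unsigned long *end)
    {
            *start = round_down(addr, PAGE_SIZE);
            *end = round_up(addr + size, PAGE_SIZE);
    }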
Reviewed-on: http://git-master/r/66369
Change-Id: Ie0e2b8b97c57ae3addcfe63968d00f6937cbc7d8
Signed-off-by: Hiroshi DOYU <hdoyu@nvidia.com>
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Reviewed-on: http://git-master/r/78698
Reviewed-by: Automatic_Commit_Validation_User
|
|
iovmprint() doesn't do much. Use standard snprintf() directly
Change-Id: I437dc6767e9984ad72c2d4adbb7b389b3f32d10f
Signed-off-by: Hiroshi DOYU <hdoyu@nvidia.com>
Reviewed-on: http://git-master/r/66370
Reviewed-by: Rohan Somvanshi <rsomvanshi@nvidia.com>
Tested-by: Rohan Somvanshi <rsomvanshi@nvidia.com>
|
|
- Fixed checkpatch.pl --strict errors.
- Inserted one space around binary operators, per
  Documentation/CodingStyle "3.1: Spaces".
- Removed the file path line at the head of the file.
- Updated the copyright year.
- Removed duplicated header inclusions.
Change-Id: I750e31cf6e90a9f36e707a6278da6137e1a8ba05
Signed-off-by: Hiroshi DOYU <hdoyu@nvidia.com>
Reviewed-on: http://git-master/r/66351
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
|
|
Change-Id: I04155e54a1aaea950921990c31c3a7a3a45ccaf1
Reviewed-on: http://git-master/r/62017
Tested-by: Gerrit_Virtual_Submit
Tested-by: Jeff Smith <jsmith@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Rebase-Id: Rd7f272d471e109faf9e82cfa039aed22d2f7b801
|
|
Bug 862658
Reverting to K36 code.
Google introduced the lock (originally as a spinlock). The lock
surrounds a read-only list and is not only unnecessary but also
affects the atomicity of lower layers. Should real protection be
necessary in a lower layer, that layer would be responsible for the
protection. The SMMU layer does not need protection.
Change-Id: I45641cfde3eebc536f57d14db447b03f4ea814b5
Reviewed-on: http://git-master/r/50349
Tested-by: Hiro Sugawara <hsugawara@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Rebase-Id: R9c0fdcdbc673c0ba353a8d0731dcf4e4bf3a81fa
|
|
Bug 862658
A spinlock is not only overkill for protecting a read-only list, but
it also affects the lower layers' atomicity.
Change-Id: I27688c9890f888068f4ffc7b9d22eacb19c946e8
Reviewed-on: http://git-master/r/49092
Reviewed-by: Daniel Willemsen <dwillemsen@nvidia.com>
Tested-by: Daniel Willemsen <dwillemsen@nvidia.com>
Rebase-Id: R251ec43706e88ee64e201b386fe4b35cbe41f1ba
|
|
Change-Id: Ic2cecccf0f4f6e6ca612af2ee07acdbca2ce07a5
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Reviewed-on: http://git-master/r/49281
Reviewed-by: Daniel Willemsen <dwillemsen@nvidia.com>
Tested-by: Daniel Willemsen <dwillemsen@nvidia.com>
Rebase-Id: R59e04e0a46099403284a036de7f35d21c6188d81
|
|
- Fix GART lockups caused by fragmentation by evicting mapped areas
  from IOVM space after an unsuccessful array pinning attempt.
- Fix a double-unpin error occurring during an interrupted submit.
- Fix a possible sleep in atomic context in the iovmm code (a
  semaphore taken inside a spinlock) by replacing the spinlock with a
  mutex; see the sketch below.
- Fix a race between handle_unpin and pin_handle.
Bug 838579
Bug 838073
Bug 818058
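A minimal sketch of the sleep-in-atomic fix (lock and function names
are hypothetical): taking a semaphore can sleep, which is illegal
while holding a spinlock but fine under a mutex.

    #include <linux/mutex.h>
    #include <linux/semaphore.h>

    static DEFINE_MUTEX(iovm_lock);   /* was a spinlock before the fix */
    static struct semaphore map_sem;  /* set up elsewhere via sema_init() */

    static void reserve_mapping(void)
    {
            mutex_lock(&iovm_lock);
            down(&map_sem);           /* may sleep; legal under a mutex */
            /* ... manipulate iovm state ... */
            up(&map_sem);
            mutex_unlock(&iovm_lock);
    }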
Original-Change-Id: I420447ffb4e02fb78a7987e22a537eefc16ff524
Reviewed-on: http://git-master/r/36129
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Rebase-Id: R893c97003f2ec2f69e224f35d99d3488f673d620
|
|
Original-Change-Id: I95cdf71e74947d4394e0cfd272a29c47562d4059
Reviewed-on: http://git-master/r/31648
Reviewed-by: Niket Sirsi <nsirsi@nvidia.com>
Tested-by: Niket Sirsi <nsirsi@nvidia.com>
Rebase-Id: Rab2b1c591569698c450709be470185b0d2fe9df4
|
|
Add a build-time CONFIG option to force conversion of non-IRAM
CarveOut memory allocation requests into IOVM requests. The default
is "y", forcing the conversion. Each forced conversion is reported to
the console.
Allocation alignments larger than the page size are enabled for IOVM.
Single-page CarveOut allocations are converted to system memory.
CarveOut memory reservation has been removed for aruba, cardhu, and
enterprise.
Original-Change-Id: I3a598431d15b92ce853b3bec97be4b583d021264
Reviewed-on: http://git-master/r/29849
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Rebase-Id: Reaf79d5ef57d3be8f2dc572de1919e854a117114
|
|
Bug 764354
Original-Change-Id: I8a390eb4dae87dceacb97461f23d13554868b046
Reviewed-on: http://git-master/r/12228
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Tested-by: Scott Williams <scwilliams@nvidia.com>
Original-Change-Id: I8e6b8303898796419fb5a759cd16edff9aeac081
Rebase-Id: R2866240384c6c24f46bd7ef54bc3dc9140d9e96b
|
|
Also convert the lock used in suspend to a spinlock.
Change-Id: Icedac130e7b5e9a69df6f16779ef8d53efe424f1
Signed-off-by: Colin Cross <ccross@android.com>
|
|
The Tegra IOVMM is an interface to allow device drivers and subsystems in
the kernel to manage the virtual memory spaces visible to I/O devices.
The interface has been designed to scale from I/O virtual memory
hardware which exists in one or more limited apertures of the address
space (e.g., a small aperture in physical address space which can
perform MMU-like remapping) up to complete virtual addressing with
multiple address spaces and memory protection.
The interface has been designed to be similar to the Linux virtual memory
system; however, operations which would be difficult to implement or
nonsensical for DMA devices (e.g., copy-on-write) are not present, and
APIs have been added to allow for management of multiple simultaneous
active address spaces.
The API is broken into four principal objects: areas, domains,
clients, and devices.
Areas
=====
An area is a contiguous region of the virtual address space which can be
filled with virtual-to-physical translations (and, optionally, protection
attributes). The virtual address of the area can be queried and used for
DMA operations by the client which created it.
As with the Linux vm_area structures, it is the responsibility of whichever
code creates an area to ensure that it is populated with appropriate
translations.
Domains
=======
A domain in the IOVMM system is similar to a process in a standard CPU
virtual memory system; it represents the entire range of virtual addresses
which may be allocated and used for translation. Depending on hardware
capabilities, one or more domains may be resident and available for
translation. IOVMM areas are allocated from IOVMM domains.
Whenever a DMA operation is performed to or from an IOVMM area, its parent
domain must be made resident prior to commencing the operation.
Clients
=======
I/O VMM clients represent any entity which needs to be able to allocate
and map system memory into I/O virtual space. Clients are created by name
and may be created as part of a "share group," where all clients created
in the same share group will observe the same I/O virtual space (i.e., all
will use the same IOVMM domain). This is similar to threads inside a process
in the CPU virtual memory manager.
The callers of the I/O VMM system are responsible for deciding on the
granularity of client creation and share group definition; depending on the
specific usage model expected by the caller, it may be appropriate to create
an IOVMM client per task (if the caller represents an ioctl'able interface
to user land), an IOVMM client per driver instance, a common IOVMM client
for an entire bus, or a global IOVMM client for an OS subsystem (e.g., the DMA
mapping interface).
Each caller is responsible for ensuring that its IOVMM client's
translations are resident on the system prior to performing DMA
operations using IOVMM addresses. This is accomplished by preceding
all DMA operations for the client with a call to
tegra_iovmm_client_lock (or tegra_iovmm_client_trylock), and following
all operations (once complete) with a call to
tegra_iovmm_client_unlock. In this regard, clients are cooperatively
context-switched, and are expected to behave appropriately.
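A minimal usage sketch of this protocol (the prototypes are
assumptions; only the function names come from the client API below):

    /* Hypothetical sketch: bracket DMA with client lock/unlock so the
     * client's translations stay resident while the hardware uses them. */
    static int do_dma(struct tegra_iovmm_client *client)
    {
            int err;

            err = tegra_iovmm_client_lock(client);  /* may block; -EINTR */
            if (err)
                    return err;

            /* ... program the engine with IOVMM addresses and wait for
             *     the DMA to complete ... */

            tegra_iovmm_client_unlock(client);
            return 0;
    }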
Devices
=======
I/O VMM devices are the physical hardware responsible for performing
the I/O virtual-to-physical translation.
Devices are responsible for domain management: the mapping and unmapping
operations needed to make translations resident in the domain (including
any TLB shootdown or cache invalidation needed to ensure coherency), locking
and unlocking domains as they are made resident by clients into the devices'
address space(s), and allocating and deallocating the domain objects.
Devices are responsible for the allocation and deallocation of domains to
allow coalescing of multiple client share groups into a single domain. For
example, if the device's hardware only allows a single address space to
be translated system-wide, performing full flushes and invalidates of the
translation at every client switch may be prohibitively expensive. In these
circumstances, a legal implementation of the IOVMM interface includes
returning the same domain for all clients on the system (regardless of
the originally-specified share group).
In this respect, a client can be assured that it will share an address
space with all of the other clients in its share group; however, it
may share this address space with other clients as well.
Multiple devices may be present in a system; a device should return a NULL
domain if it is incapable of servicing the client when it is asked to
allocate a domain.
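As a concrete illustration of domain coalescing, a device with a
single system-wide translation could implement its domain allocator as
below (struct and parameter names are assumptions for the sketch):

    /* Hypothetical sketch: every client, whatever its share group,
     * receives the same domain on single-address-space hardware. */
    static struct tegra_iovmm_domain *one_domain_alloc(
            struct tegra_iovmm_device *dev,
            struct tegra_iovmm_client *client)
    {
            static struct tegra_iovmm_domain global_domain;

            /* set up once via tegra_iovmm_domain_init() in real code */
            return &global_domain;
    }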
----------------------------------------------------------------------------
IOVMM Client API
================
tegra_iovmm_alloc_client - Called to create a new IOVMM client object; the
implementation may create a new domain or return an existing one depending on
both the device and the share group.
tegra_iovmm_free_client - Frees a client.
tegra_iovmm_client_lock - Makes a client's translations resident in the IOVMM
device for subsequent DMA operations. May block if the device is incapable
of context-switching the client when it is called. Returns -EINTR if the
waiting thread is interrupted before the client is locked.
tegra_iovmm_client_trylock - Non-blocking version of
tegra_iovmm_client_lock.
tegra_iovmm_client_unlock - Called by clients after DMA operations on
IOVMM-translated addresses are complete; allows the IOVMM system to
context-switch the current client out of the device if needed.
tegra_iovmm_create_vm - Called to allocate an IOVMM area. If
lazy / demand-loading of pages is desired, clients should supply a pointer
to a tegra_iovmm_area_ops structure providing callback functions to load, pin
and unpin the physical pages which will be mapped into this IOVMM region.
tegra_iovmm_get_vm_size - Called to query the total size of an IOVMM
client's I/O virtual address space.
tegra_iovmm_free_vm - Called to free an IOVMM area, releasing any
pinned physical pages mapped by it and decommitting any resources
(memory for PTEs / PDEs) required by the VM area.
tegra_iovmm_vm_insert_pfn - Called to insert an exact pfn (system memory
physical page) into the area at a specific virtual address. Illegal to call
if the IOVMM area was originally created with lazy / demand-loading.
tegra_iovmm_zap_vm - Called to mark all mappings in the IOVMM area as
invalid / no-access; the area continues to consume I/O virtual address
space.
For lazy / demand-loaded IOVMM areas, a zapped region will not be reloaded
until it has been unzapped; DMA operations using the affected translations
may fault (if supported by the device).
tegra_iovmm_unzap_vm - Called to re-enable lazy / demand-loading of pages
for a previously-zapped IOVMM area.
tegra_iovmm_find_area_get - Called to find the IOVMM area object
corresponding to the specified I/O virtual address, or NULL if the
address is not allocated in the client's address space. Increases the
reference count on the IOVMM area object.
tegra_iovmm_area_get - Called to increase the reference count on the
IOVMM area object.
tegra_iovmm_area_put - Called to decrease the reference count on the
IOVMM area object.
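Putting the client API together, a hedged end-to-end sketch (all
prototypes, the area field, and the error codes are assumptions; only
the tegra_iovmm_* names come from the list above):

    /* Hypothetical sketch: allocate a client, create an area, insert an
     * explicit pfn translation, run DMA under the client lock, clean up. */
    static int iovmm_example(struct device *dev, unsigned long pfn)
    {
            struct tegra_iovmm_client *client;
            struct tegra_iovmm_area *area;
            int err = 0;

            client = tegra_iovmm_alloc_client(dev, "example-group");
            if (!client)
                    return -ENOMEM;

            /* no lazy-loading ops: pfns will be inserted explicitly */
            area = tegra_iovmm_create_vm(client, NULL, PAGE_SIZE);
            if (!area) {
                    err = -ENOMEM;
                    goto free_client;
            }

            tegra_iovmm_vm_insert_pfn(area, area->iovm_start, pfn);

            err = tegra_iovmm_client_lock(client);  /* -EINTR possible */
            if (!err) {
                    /* ... DMA using the area's I/O virtual address ... */
                    tegra_iovmm_client_unlock(client);
            }

            tegra_iovmm_free_vm(area);
    free_client:
            tegra_iovmm_free_client(client);
            return err;
    }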
IOVMM Device API
================
tegra_iovmm_register - Called to register a new IOVMM device with the
IOVMM manager.
tegra_iovmm_unregister - Called to remove an IOVMM device from the
IOVMM manager (unspecified behavior if called while a translation is
active and / or in-use).
tegra_iovmm_domain_init - Called to initialize all of the IOVMM
manager's data structures (block trees, etc.) after allocating a new
domain.
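A hedged sketch of device-side registration (the struct layout and
field names are assumptions):

    /* Hypothetical sketch: describe the hardware to the IOVMM manager
     * and register it at init time. */
    static struct tegra_iovmm_device example_dev = {
            .name = "example-iommu",
            .ops  = &example_ops,   /* HAL callbacks; see the HAL sketch */
    };

    static int __init example_iovmm_init(void)
    {
            return tegra_iovmm_register(&example_dev);
    }

    static void __exit example_iovmm_exit(void)
    {
            /* behavior is unspecified while translations are in use */
            tegra_iovmm_unregister(&example_dev);
    }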
IOVMM Device HAL
================
map - Called to inform the device about a new lazy-mapped IOVMM area. Devices
may load the entire VM area when this is called, or at any time prior to
the completion of the first read or write operation using the translation.
unmap - Called to zap or decommit translations.
map_pfn - Called to insert a specific virtual-to-physical translation in the
IOVMM area
lock_domain - Called to make a domain resident; should return 0 if the
domain was successfully context-switched, non-zero if the operation
cannot be completed (e.g., all available simultaneous hardware
translations are locked). If the device can guarantee that every
domain it allocates is always usable, this function may be NULL.
unlock_domain - Releases a domain from residency, allowing the
hardware translation to be used by other domains.
alloc_domain - Called to allocate a new domain; allowed to return an
existing domain.
free_domain - Called to free a domain.
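A hedged sketch of wiring up this HAL (the ops struct name and every
prototype are assumptions; only the operation names come from the list
above):

    /* Hypothetical sketch of a device's HAL table; the remaining
     * handlers would be declared and wired up like alloc_domain. */
    static struct tegra_iovmm_domain *example_alloc_domain(
            struct tegra_iovmm_device *dev,
            struct tegra_iovmm_client *client)
    {
            struct tegra_iovmm_domain *domain;

            domain = kzalloc(sizeof(*domain), GFP_KERNEL);
            if (!domain)
                    return NULL;
            /* per-device aperture bounds would be handed to
             * tegra_iovmm_domain_init() here */
            return domain;
    }

    static const struct tegra_iovmm_device_ops example_ops = {
            .alloc_domain = example_alloc_domain,
            /* .map / .unmap / .map_pfn populate and tear down
             * translations; .lock_domain may be NULL if every domain
             * is always usable; .unlock_domain and .free_domain
             * complete the table. */
    };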
Change-Id: Ic65788777b7aba50ee323fe16fd553ce66c4b87c
Signed-off-by: Gary King <gking@nvidia.com>
|