regmap_config.reg_stride is introduced. All extant register addresses
are a multiple of this value. Users of serial-oriented regmap busses will
typically set this to 1. Users of the MMIO regmap bus will typically set
this based on the value size of their registers, in bytes, so 4 for a
32-bit register.
Throughout the regmap code, actual register addresses are used. Wherever
the register address is used to index some array of values, the address
is divided by the stride to determine the index, or vice-versa. Error-
checking is added to all entry-points for register address data to ensure
that register addresses actually satisfy the specified stride. The MMIO
bus ensures that the specified stride is large enough for the register
size.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
(cherry picked from commit f01ee60fffa4dc6c77122121233a793f7f696e67)
Change-Id: I634977dcb0fe9ff95c7932e9195a2c1918eb1c18
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/96510
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
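
[A minimal usage sketch, not part of the original commit, showing how an MMIO
user of the new field might look; the config name, device and register window
are hypothetical:]

    #include <linux/regmap.h>

    /* Hypothetical 32-bit MMIO block: values are 4 bytes wide, so register
     * addresses must be multiples of 4. */
    static const struct regmap_config foo_regmap_config = {
            .reg_bits     = 32,
            .val_bits     = 32,
            .reg_stride   = 4,
            .max_register = 0xfc,
    };

    /* In probe(), after mapping the register window into 'base': */
    struct regmap *map = devm_regmap_init_mmio(&pdev->dev, base,
                                               &foo_regmap_config);
    if (IS_ERR(map))
            return PTR_ERR(map);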

Some bus types have very fast IO. For these, acquiring a mutex for every
IO operation is a significant overhead. Allow busses to indicate their IO
is fast, and enhance regmap to use a spinlock for those busses.
[Currently limited to native endian registers -- broonie]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
(cherry picked from commit bacdbe077342ecc9e7b3e374cc5a41995116706a)
Change-Id: I337f27a09f2b176b46e8f6d05401957a1abc0609
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/96503
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
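
[A hedged sketch, not part of the original commit, of how a bus definition
opts in to the spinlock-based locking; the callback names are placeholders:]

    #include <linux/regmap.h>

    static int foo_bus_write(void *context, const void *data, size_t count);
    static int foo_bus_read(void *context, const void *reg, size_t reg_size,
                            void *val, size_t val_size);

    static const struct regmap_bus foo_bus = {
            .fast_io = true,  /* IO is cheap: core locks with a spinlock, not a mutex */
            .write   = foo_bus_write,
            .read    = foo_bus_read,
    };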

If there are no nodes in the cache, nodes will be 0, so calculating
"registers / nodes" will cause division by zero.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: stable@vger.kernel.org
(cherry picked from commit c04c1b9ee8f30c7a3a25e20e406247003f634ebe)
Change-Id: I30047f1e0bd08417794c3d19e2fe1d480a74e083
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/96501
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
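
[A hedged sketch of the guard this fix amounts to in the rbtree debugfs code;
variable names follow the surrounding function but are shown from memory:]

    /* Only compute the per-node average when the cache has any nodes at all. */
    if (nodes)
            average = registers / nodes;
    else
            average = 0;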

The code currently passes the register offset within the current block to
regcache_lookup_reg. This works fine as long as there is only one block and
its base register is 0, but in all other cases it will look up the default
for the wrong register, which can cause unnecessary register writes. This
patch fixes it by passing the actual register number to regcache_lookup_reg.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: <stable@vger.kernel.org>
(cherry picked from commit 4b4e9e43fd210e0cd2a5d29357e7c000e13e08ae)
Change-Id: Ibed70828471423df5432fea67316ca9ad8aeb52a
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/96497
Reviewed-by: Automatic_Commit_Validation_User
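
[A simplified sketch, not the literal kernel source, of the indexing fix inside
the per-node sync loop; cached_value_at() is a hypothetical stand-in for reading
the value stored at offset i in the node's block:]

    unsigned int i, reg, val;
    int ret;

    for (i = 0; i < rbnode->blklen; i++) {
            /* The block covers base_reg .. base_reg + blklen - 1, so the
             * absolute register number is base_reg + i. */
            reg = rbnode->base_reg + i;
            val = cached_value_at(rbnode, i);   /* hypothetical helper */

            /* was: regcache_lookup_reg(map, i), which is wrong whenever
             * base_reg is non-zero or more than one block exists */
            ret = regcache_lookup_reg(map, reg);
            if (ret >= 0 && val == map->reg_defaults[ret].def)
                    continue;       /* still at its default, no write needed */

            regmap_write(map, reg, val);
    }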

Otherwise we'll end up running with bogus register numbers.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
(cherry picked from commit f9353e70bcebd00cd182d946083afd7d8eddd259)
Change-Id: I7615fd2d63ec29dd869585fb20a151067b53c72a
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/96494
Reviewed-by: Automatic_Commit_Validation_User

Most of the current users have register 0 as a volatile register or don't
have a register 0, so it has not been apparent that it was not getting synced.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
(cherry picked from commit 994f5db65ef4b83db0321842bd43c6bc0a51f000)
Change-Id: Iaada5ae8ba705f45049eb1c85a1909a9f192f765
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/96493
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>

In order to allow us to support partial sync operations, add minimum and
maximum register arguments to the sync operation and update the rbtree
and LZO caches to use this new information. The LZO implementation is
clearly not optimal (we could exit the iteration earlier), but there may
be room for more wide-reaching optimisation there.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
(cherry picked from commit ac8d91c801905a061ca883dca427a5e19602a1e7)
Change-Id: I92ceee1c704ea7c864bff0559d36cf34554c3ba5
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/96489
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
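
[A hedged sketch of the shape of this change; the callback lives in regmap's
internal regcache_ops, and the range check shown is a simplification of what a
backend does while walking its blocks:]

    /* The cache backend's sync hook gains a register range: */
    int (*sync)(struct regmap *map, unsigned int min, unsigned int max);

    /* which lets a backend skip whole blocks outside [min, max], e.g.: */
    if (rbnode->base_reg > max)
            break;                          /* past the requested range */
    if (rbnode->base_reg + rbnode->blklen < min)
            continue;                       /* ends before the range starts */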

The debugfs functions don't stub themselves out quite as well as might
be desirable, so provide functions which do this stubbing properly.
Reported-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
(cherry picked from commit cce585ce1ebd5307c9709e24758d5eb8a1e087a7)
Change-Id: I98580d938816e547c0b3c006be93facf38a29965
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/87569
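
[A hedged sketch of the usual kernel pattern this refers to; the exact list of
functions stubbed in regmap's internal header may differ:]

    #ifdef CONFIG_DEBUG_FS
    void regmap_debugfs_init(struct regmap *map);
    void regmap_debugfs_exit(struct regmap *map);
    #else
    static inline void regmap_debugfs_init(struct regmap *map) { }
    static inline void regmap_debugfs_exit(struct regmap *map) { }
    #endif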

Show the register ranges we have in each rbtree node in debugfs, plus
some statistics on how big each node is and the total number of nodes.
It may also be worth collecting data on the ranges of dirty registers
to see if there's much mileage in trying to coalesce writes on sync.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
(cherry picked from commit bad2ab4b6d938482c2b0bdcf80a8d14dbef4e8f5)
Change-Id: Ib1f6b29c703b2324fea0d93db5c3d92be6bdee22
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/87568

Calling regcache_exit from regcache_rbtree_init is first of all a layering
violation, and secondly it will cause double frees. regcache_exit will free
buffers allocated by the core, but the core will also free the same buffers
when the cacheops init callback returns an error, so we end up with a double
free. Fix this by not calling regcache_exit and instead freeing only those
buffers which were allocated in this function.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
(cherry picked from commit 462a185c5cea7063348003c1644b70a6f6780f01)
Change-Id: Ib3e955599995543d2948d3f85ca46d648fee6bd9
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/87557
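
[A simplified sketch of the corrected init function as described above; only
what this function allocated is cleaned up on error, via regcache_rbtree_exit,
while the regcache core frees its own buffers when the callback fails:]

    static int regcache_rbtree_init(struct regmap *map)
    {
            struct regcache_rbtree_ctx *rbtree_ctx;
            int i, ret;

            map->cache = kmalloc(sizeof(*rbtree_ctx), GFP_KERNEL);
            if (!map->cache)
                    return -ENOMEM;

            rbtree_ctx = map->cache;
            rbtree_ctx->root = RB_ROOT;
            rbtree_ctx->cached_rbnode = NULL;

            for (i = 0; i < map->num_reg_defaults; i++) {
                    ret = regcache_rbtree_write(map,
                                                map->reg_defaults[i].reg,
                                                map->reg_defaults[i].def);
                    if (ret)
                            goto err;
            }

            return 0;

    err:
            /* Do not call regcache_exit() here: it would also free buffers
             * owned by the regcache core, which the core frees again when
             * this callback returns an error. */
            regcache_rbtree_exit(map);
            return ret;
    }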

Simplify the check for registers set at their default value by avoiding
picking a default value in the case where we don't have one. Instead we
only compare the current value to the default value when we have actually
looked one up. This fixes the case where we don't have a default stored
but the value was set to zero, even though zero isn't the chip default.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
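
[A before/after sketch of what this simplification amounts to inside the sync
loop; regcache_lookup_reg returns the index of a stored default or a negative
error, and the "before" behaviour is reconstructed from the description above:]

    /* Before (simplified): with no stored default the code assumed a default
     * of zero, so a value cached as zero was never written back even when
     * zero is not the chip default. */
    ret = regcache_lookup_reg(map, reg);
    def = ret >= 0 ? map->reg_defaults[ret].def : 0;
    if (val == def)
            continue;

    /* After: skip the write only when a stored default exists and matches. */
    ret = regcache_lookup_reg(map, reg);
    if (ret >= 0 && val == map->reg_defaults[ret].def)
            continue;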

Ensure that when we start up in cache-only mode we can store defaults of
zero; otherwise, if the hardware is unavailable we won't be able to read.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>

If a register isn't cached then let callers know that, so they can fall
back or handle the error appropriately.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
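
[A hedged illustration of the kind of fallback this enables in the read path;
this is a simplified reading of the regmap core, not the literal patch, and
hardware_read() is a hypothetical placeholder for the bus access:]

    ret = regcache_read(map, reg, &val);
    if (ret == 0)
            return 0;               /* served from the cache */

    /* Not cached: fall back to the hardware if we are allowed to touch it,
     * otherwise report the error to the caller. */
    if (map->cache_only)
            return -EBUSY;

    return hardware_read(map, reg, &val);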

Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>

Move the handling of the cached rbnode into regcache_rbtree_lookup. This
allows us to remove some duplicated code sections in regcache_rbtree_read
and regcache_rbtree_write.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>

Use regcache_{set,get}_val in regcache_rbtree_{set,get}_register instead of
re-implementing their functionality.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>

Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>

This patch adds support for the rbtree cache compression type.
Each rbnode manages a variable-length block of registers. No two nodes may
have overlapping blocks. Each block has a base register and a current top
register; all the other registers, if any, lie between these two in
ascending order.
The reasoning behind the construction of this rbtree is simple. In the
snd_soc_rbtree_cache_init() function, we iterate over the register defaults
provided by the regcache core. For each register value that is non-zero we
insert it in the rbtree. In order to determine in which rbnode we need to
add the register, we first check whether there is another register already
added that is adjacent to the one we are about to add. If that is the case
we append it to that rbnode's block, otherwise we create a new rbnode with
a single register in its block and add it to the tree.
There are various optimizations across the implementation to speed up
lookups by caching the most recently used rbnode.
Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Tested-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
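
[A hedged sketch of the node layout described above; the field names follow the
regmap rbtree cache, but the details should be treated as illustrative:]

    struct regcache_rbtree_node {
            struct rb_node node;    /* rbtree linkage */
            unsigned int base_reg;  /* first register covered by this block */
            void *block;            /* cached values for base_reg .. base_reg + blklen - 1 */
            unsigned int blklen;    /* number of contiguous registers in the block */
    };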