Age | Commit message | Author |
|
Remove the config variable usage from the kernel and make the secure
firmware check dynamic. This makes LP1 resume tricky since we need to
execute out of TZRAM till SDRAM is out of self-refresh. To fix this,
store secure firmware presence bit in TZRAM during boot.
Bug 1475528
Change-Id: Ic18766bbee14626e8cf092363d57f4d98b44b6df
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Reviewed-on: http://git-master/r/377616
|
|
ARM defines PSCI interfaces to be used for power states. We have
been using the actual semantics for quite some time now and so
can remove our implementation of the SMC issuing code and use the
generic interfaces present in <arm/arm64>/kernel/psci.c.
Bug 1475528
Change-Id: Ieba8a0a54f5ee731626e7d92a767ef044e88f12d
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Reviewed-on: http://git-master/r/378354
|
|
This reverts commit a16d512e5b2a8d3ce9c4cc6e59a219b8c81b5164,
as we want to remove the dcache flush from the NS world
completely and piggy-back on the secure world's cache flush
operation to flush the NS cache lines.
Bug 1454640
Change-Id: Ia0973238f5203562f32fa785a19ecab5d8d920b5
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Reviewed-on: http://git-master/r/368268
|
|
non-secure mode."
This reverts commit 7f93a0dddf39f372c064f772f9af6903e91aaacf as
the t132ref builds break with the following errors -
<android>/kernel/drivers/platform/tegra/../../../arch/arm/mach-tegra/reset.c:45: undefined reference to `is_secure_mode'
<android>/kernel/drivers/platform/tegra/../../../arch/arm/mach-tegra/reset.c:57: undefined reference to `is_secure_mode'
<android>/kernel/drivers/platform/tegra/../../../arch/arm/mach-tegra/reset.c:58: undefined reference to `tegra_generic_smc'
Change-Id: I4e44c2ffba4e1c013213e543b67f2d49a928b764
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Reviewed-on: http://git-master/r/365347
|
|
- Remove CONFIG_TEGRA_USE_SECURE_KERNEL config option
- Use DBGDSCR.NS bit to dynamically get secure/non-secure mode
- Replace ifdefs with dynamic code.
- Keep CONFIG_TRUSTED_LITTLE_KERNEL to enable secure os
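The dynamic check in the second bullet can be sketched in C. DBGDSCR.NS is bit 18 in the ARMv7 debug register layout; the MRC read of DBGDSCR is privileged, so this sketch takes the already-read register value as a parameter. The macro and function names are illustrative, not the actual kernel implementation.

```c
#include <stdbool.h>
#include <stdint.h>

/* DBGDSCR.NS (bit 18, ARMv7 debug architecture):
 * 0 = the core is executing in the secure world,
 * 1 = the core is executing non-secure (i.e. secure firmware
 *     owns the secure world). Illustrative sketch only. */
#define DBGDSCR_NS_BIT (1u << 18)

static bool running_nonsecure(uint32_t dbgdscr)
{
    return (dbgdscr & DBGDSCR_NS_BIT) != 0;
}
```

Replacing the compile-time ifdefs with a predicate like this is what lets one kernel image serve both secure and non-secure boots.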
bug 1411345
Change-Id: I75ddfed7a35fcb30e2772bb43057ae022bcf09b3
Signed-off-by: Nitin Sehgal <nsehgal@nvidia.com>
Reviewed-on: http://git-master/r/353155
Reviewed-by: Varun Wadekar <vwadekar@nvidia.com>
Tested-by: Varun Wadekar <vwadekar@nvidia.com>
|
|
There's no point having the secure world flush the dcache
for us. This is more of a requirement from the NS world and
the chip, rather than the secure world.
Bug 1387322
Change-Id: I879e48347faac2a2b2e841e39b4c8830416c38be
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Reviewed-on: http://git-master/r/356339
(cherry picked from commit 415e0884bbf9194f6ff2e389b81dca9c376b33fd)
Reviewed-on: http://git-master/r/359651
|
|
Use SIP Service calls (0x82000000x) and Standard Service calls
(0x8400000x) from the DEN0028 spec.
PSCI says that we need to use 0x8400000x in r0 for any power
management features i.e. cpu idle/hotplug/on/off followed by the
actual cpu state (LP2/LP1/LP0) in r1. This translates to the Standard
Service call space mentioned in the DEN0028 spec.
The SIP service calls can be used by silicon partners for their CPU
specific settings. We use this SMC space for L2 settings and to set
the CPU reset vector.
SMCs that are interrupted return a special status code to the NS world.
Look for that status and send a restart SMC (value = 60 << 24) when
received.
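The function-ID split described above can be sketched in C. The macro and helper names are illustrative, but the bit layout follows the DEN0028 SMC Calling Convention: bit 31 marks a fast call, bits 29:24 name the owning service (2 = SIP, 4 = Standard Service), and the low 16 bits are the function number.

```c
#include <stdint.h>

/* DEN0028 fast-call function ID layout (illustrative names):
 * bit 31     = fast call
 * bits 29:24 = owning entity
 * bits 15:0  = function number */
#define SMC_FAST_CALL   (1u << 31)
#define SMC_OWNER_SHIFT 24
#define SMC_OWNER_SIP   2u   /* silicon-partner calls: L2, reset vector */
#define SMC_OWNER_STD   4u   /* Standard Service calls: PSCI power mgmt */

static uint32_t smc_fid(uint32_t owner, uint32_t func)
{
    return SMC_FAST_CALL | (owner << SMC_OWNER_SHIFT) | (func & 0xFFFFu);
}

/* Value sent to restart an SMC that was interrupted mid-flight,
 * per the commit message above (60 << 24). */
#define SMC_RESTART (60u << 24)
```

So PSCI calls land at 0x8400000x and SIP calls at 0x8200000x, matching the ranges the commit names.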
Also removed save/restore of r4-r12 as we rely on the secure OS to
do this for us.
Change-Id: I6fae83cc96d29c23305177df770fa07f7970c383
Signed-off-by: Scott Long <scottl@nvidia.com>
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Reviewed-on: http://git-master/r/329998
|
|
LPAE allows physical addresses up to 40 bits.
Consequently, the layout of TTBR changes. This
change modifies the suspend pgtable init and
suspend code to support the increased address
range and register layout for LPAE.
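The register-layout change can be illustrated: with LPAE (TTBCR.EAE = 1) the translation table base registers widen to 64 bits so the page-table base can sit anywhere in the 40-bit physical address space. The helper below is a sketch under that assumption (4 KB-aligned table, base address field only), not kernel code.

```c
#include <stdint.h>

/* LPAE widens the physical address space to 40 bits, so the TTBRs
 * become 64-bit and hold a wider base-address field. Low bits must
 * be zero because the table is aligned. Illustrative sketch. */
#define LPAE_PA_BITS 40
#define LPAE_PA_MASK ((1ULL << LPAE_PA_BITS) - 1)

static uint64_t lpae_ttbr(uint64_t table_pa)
{
    /* 4 KB-aligned table: keep bits [39:12] as the base address. */
    return table_pa & LPAE_PA_MASK & ~0xFFFULL;
}
```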
Bug 1271462
Change-Id: I44015aba943e2972cc99559d957209a7d1c364c7
Signed-off-by: Prashant Malani <pmalani@nvidia.com>
Reviewed-on: http://git-master/r/215252
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com>
|
|
This new config would only be enabled when we enable a secure os
implementation. This config would be generic and we can reuse it
if/when we change the secure os vendor.
Change-Id: I94a0a365d4dc834fafa1137a0c0d9adf1b394c51
Signed-off-by: James Zhao <jamesz@nvidia.com>
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Reviewed-on: http://git-master/r/211756
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Chris Johnson <cwj@nvidia.com>
|
|
Perform L2 sync before disabling PL310.
Change-Id: I84b4fb3844f11e5f4a9752979bf413d2123282f6
Signed-off-by: Antti P Miettinen <amiettinen@nvidia.com>
Reviewed-on: http://git-master/r/213588
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
|
|
The ARM errata 799270 requires a data dependency between the returning
device load data and the MCR instruction that sets the ACTLR.SMP bit. Fix the
current workaround so it conforms to the errata document.
bug 1195192
Change-Id: Ideeb3dd3d865323d59ae4bc7a2d40889acfe379d
Signed-off-by: Bo Yan <byan@nvidia.com>
Reviewed-on: http://git-master/r/211812
(cherry picked from commit 6b738d1059962d80857b09d70a8878915f17c39e)
Reviewed-on: http://git-master/r/213133
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
|
|
Let's not clobber aux register value before checking it.
Change-Id: I941966a417d58d100acc14430c87f31c27766765
Signed-off-by: Antti P Miettinen <amiettinen@nvidia.com>
Reviewed-on: http://git-master/r/212462
Reviewed-by: Bo Yan <byan@nvidia.com>
|
|
As page tables can be outer cacheable we want to keep L2
available while MMU is on. Therefore, upon resuming from power
gating, enable L2 before MMU enable and upon power gating entry
disable L2 after MMU has been disabled. The optimization
is not stable with secure OS so leave the optimization out
for secure OS config. T148 has separate caches, so there the L2 flush
cannot be avoided. Also the caches are of a different size, so
the l2x0 module is initialized upon resume.
Bug 1046695
Change-Id: I520db89e880c08113e0b3e29a88efaad0c100045
Signed-off-by: Antti P Miettinen <amiettinen@nvidia.com>
Reviewed-on: http://git-master/r/204852
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
|
|
Memory and instruction barriers are needed after the TLB is
invalidated and the BTAC is flushed, as per the ARM TRM. Without this
there is an invalid page translation in some cases.
Bug 1189280
Reviewed-on: http://git-master/r/195070
(cherry picked from commit 997c54686349728cdf54cfeae96b5f4078ccb436)
Change-Id: I85e297ffd9245c5066f656bbb70ea257b8b3b317
Signed-off-by: Amit Kamath <akamath@nvidia.com>
Reviewed-on: http://git-master/r/199867
Reviewed-by: Automatic_Commit_Validation_User
Tested-by: Sarvesh Satavalekar <ssatavalekar@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
|
|
This change removes a duplicated L2 cache flush.
It moves the SMC (0xFFFFFFE4) a bit later in the PM entry process,
replacing tegra_flush_cache in tegra_sleep_cpu_finish().
Bug 1195365
(cherry picked from commit aded133982f0d54e0be3446b8d15c185aa352aac)
Reviewed-on: http://git-master/r/#change,190431
Change-Id: I876ad1d8322f571d7c8561cea83bbf22915d01d8
Signed-off-by: Hyung Taek Ryoo <hryoo@nvidia.com>
Reviewed-on: http://git-master/r/190451
Reviewed-by: Mrutyunjay Sawant <msawant@nvidia.com>
Tested-by: Mrutyunjay Sawant <msawant@nvidia.com>
|
|
Fix BOND_OUT_L register access to use the right offset.
Change-Id: I0ccc2adc6aaef7e542436e2c4d65994c59a5a2d3
Signed-off-by: Antti P Miettinen <amiettinen@nvidia.com>
Reviewed-on: http://git-master/r/192407
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Bo Yan <byan@nvidia.com>
Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
|
|
Do an external device read to start L2 clock, then change
SMP bit in ACTLR. The ACTLR change needs to be done immediately
after the device read is done since there are only 256 clock cycles
maximum available before the L2 clock can be gated again.
bug 1208654
bug 1195192
Change-Id: Ide1c0476d629cbea07f585013ed3b7e79a67c86e
Signed-off-by: Bo Yan <byan@nvidia.com>
Reviewed-on: http://git-master/r/187521
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Bobby Meeker <bmeeker@nvidia.com>
|
|
Function "cpu_do_idle" is defined in ARM common code, there is no need
for "tegra_cpu_wfi" which has the identical implementation.
Change-Id: I8ca3ada171990148162276a76434aebd2bd188e2
Signed-off-by: Bo Yan <byan@nvidia.com>
Reviewed-on: http://git-master/r/159157
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
Rebase-Id: Rd6a6fc2bab5af491f65b018cc4bd4cecfdd2b60b
|
|
With this revert, Dalmore enters the LP0 state in Main.
Otherwise a NULL exception is encountered (variable l2x0_base).
The revert is required till we get proper secureos code and
ensure that T114 does not enter the l2x0 code.
This reverts commit 7274dfdea8e1512b863438d4f34074a67b5b4a97.
Change-Id: Ib3ff4f1664fdc1693c2768eb3ecc0205a456c982
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/145288
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Rebase-Id: R147929cee9916146be1ce2a0f22895afd5d3622f
|
|
1. for secondary CPU, always flush L1 only, this is irrespective of
Cortex A9 or Cortex A15
2. disable cache before flushing it when rail-gating CPU0
3. do not flush cache before entering ARM common code cpu_suspend,
which by itself will flush cache.
Still, it's highly desirable to flush cache in __cpu_suspend_save,
since this will flush L2 irrespective of A9 or A15.
Reviewed-on: http://git-master/r/133945
Change-Id: I2c6eb20546b5fc8b5432dc73c2f97480cbf29ee8
Signed-off-by: Bo Yan <byan@nvidia.com>
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/143126
Reviewed-by: Rohan Somvanshi <rsomvanshi@nvidia.com>
Tested-by: Rohan Somvanshi <rsomvanshi@nvidia.com>
Rebase-Id: R7aa2a54eb7cb87f99452f44f2a14a559e171cbda
|
|
During power gating we need to make sure that all state is
properly flushed to ungated part of the chip. To ensure
that data cache is completely cleaned after flush, the
cache needs to be disabled before flush. When data cache
is disabled we naturally cannot write to cacheable memory.
Therefore handle the disable inside the flush function.
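The ordering constraint above can be modelled: once SCTLR.C is cleared, stores to cacheable memory may be lost, so no caller-side bookkeeping may run between the disable and the flush. The step-recording code below only models that required ordering; all names are illustrative.

```c
/* Model of "disable inside the flush function": the caller never
 * sees the window in which the cache is disabled but not yet
 * flushed. Illustrative sketch, not the kernel implementation. */
enum op { OP_DISABLE_DCACHE = 1, OP_CLEAN_INVALIDATE = 2 };

static int log_pos;
static enum op op_log[2];

static void do_op(enum op o) { op_log[log_pos++] = o; }

static void flush_dcache_with_disable(void)
{
    do_op(OP_DISABLE_DCACHE);   /* clear SCTLR.C (modelled) */
    do_op(OP_CLEAN_INVALIDATE); /* clean/invalidate by set/way (modelled) */
}
```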
Bug 1045096
Change-Id: I740ffdfd43c4b75bf58aaad4279092040a8c7405
Signed-off-by: Antti P Miettinen <amiettinen@nvidia.com>
Reviewed-on: http://git-master/r/133799
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
Rebase-Id: R4496004d2a32b2dfda731c77502a9489c0eb6b08
|
|
For the CONFIG_TRUSTED_FOUNDATION code paths, differentiate L2
enable vs. reenable, which are different SMCs (won't trigger an
invalidate in the case of a reenable).
On an L2 disable SMC, optionally pass 0 for the L2 ways arg,
which skips the full clean/invalidate (and simply disables
the L2).
In order to safely skip flushing the L2 on the disable, we have
to be careful what we dirty from the time we flush the L1 until
we disable the L2.
Reviewed-on: http://git-master/r/119786
Original-author: Chris Johnson <cwj@nvidia.com>
Change-Id: Iebcf1042ce2b58513e40e9d49f87ecec9dfdd301
Signed-off-by: Chris Johnson <cwj@nvidia.com>
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Reviewed-on: http://git-master/r/130061
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
Rebase-Id: R4dde3b2e285d5917bdba15a318ac18702eb59c90
|
|
This reverts commit 7ac85a9d58b51352605c845a0066c949c0c85f72.
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Rebase-Id: R5fe5ed9d55ec2405b1e869d1e10342702fe1b95b
|
|
For the CONFIG_TRUSTED_FOUNDATION code paths, differentiate L2
enable vs. reenable, which are different SMCs (won't trigger an
invalidate in the case of a reenable).
On an L2 disable SMC, optionally pass 0 for the L2 ways arg,
which skips the full clean/invalidate (and simply disables
the L2).
In order to safely skip flushing the L2 on the disable, we have
to be careful what we dirty from the time we flush the L1 until
we disable the L2.
Bug 939415
Signed-off-by: Chris Johnson<cwj@nvidia.com>
Change-Id: I756d2ceda83d5d8d6bc5670218e9d874d5e5f62a
Reviewed-on: http://git-master/r/119786
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
Rebase-Id: R3ef57b700f11d16ca5821194ab8144fd97a9fb47
|
|
The ENABLE_EXT field in the CSR register controls which power partition
is to be gated. If it's CPU-partition power gating only, there is no
need to flush or invalidate L2 cache before/after power gating.
With this change, L2 cache is flushed/invalidated only when the
non-CPU partition is to be power gated or when rail gating is
selected.
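The decision the commit describes reduces to a small predicate. The enum and helper below are illustrative names for the three gating modes named above, not the actual kernel code.

```c
#include <stdbool.h>

/* Flush/invalidate L2 only when more than the CPU partition loses
 * power; CPU-only power gating leaves the L2 powered. Sketch. */
enum gating { GATE_CPU_ONLY, GATE_NONCPU_PARTITION, GATE_RAIL };

static bool l2_flush_needed(enum gating mode)
{
    return mode == GATE_NONCPU_PARTITION || mode == GATE_RAIL;
}
```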
Change-Id: I6be522de694117a058eedc9584f2157d89f99dc4
Signed-off-by: Bo Yan <byan@nvidia.com>
Reviewed-on: http://git-master/r/103476
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Mark Stadler <mastadler@nvidia.com>
Reviewed-by: Jin Qian <jqian@nvidia.com>
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Rebase-Id: R3108cb94a1efc64574ff58067e239bd8539e6059
|
|
The function flush_cache_all flushes all caches within level of
coherency. For Cortex-A9, this is ok since only L1 is defined. For
Cortex-A15, it will flush both L1 and L2; this behavior is not
desired when there is no need to touch L2. So a new function is
defined to flush just the L1 cache.
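The level-of-coherency walk behind flush_cache_all can be sketched via the CLIDR layout: each cache level gets a 3-bit type field starting at bits 2:0, and the Level of Coherency sits in bits 26:24. Flushing only L1 means stopping the walk after level 1 instead of at LoC. The decode helpers below are illustrative, not the kernel implementation.

```c
#include <stdint.h>

/* CLIDR decode (ARMv7-A): 3-bit cache type per level, LoC in
 * bits 26:24. Illustrative helpers. */
static unsigned clidr_ctype(uint32_t clidr, unsigned level) /* level >= 1 */
{
    return (clidr >> (3 * (level - 1))) & 0x7;
}

static unsigned clidr_loc(uint32_t clidr)
{
    return (clidr >> 24) & 0x7;
}
```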
Change-Id: Id5a651770b70496d0dde6e90b226a19df90a57d0
Signed-off-by: Bo Yan <byan@nvidia.com>
Reviewed-on: http://git-master/r/102682
Reviewed-by: Mark Stadler <mastadler@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
GVS: Gerrit_Virtual_Submit
Rebase-Id: R06daabe836a97de0f4cace26235bf06ffbd49501
|
|
Rebase-Id: R940fad74c7e91ef3d1d3d589a48064ccb7335541
|
|
Add CONFIG_TRUSTED_FOUNDATIONS build option and calls to issue
SMCs to the TL secure monitor (used when needing to update state
not writable by non-secure code).
Make security/tf_driver an optional part of the build, which is
part of the TL framework to interact with secure services.
Bug 883391
Change-Id: I9c6c14ff457fb3a0c612d558fe731a17c2480750
Signed-off-by: Chris Johnson <cwj@nvidia.com>
Reviewed-on: http://git-master/r/65616
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Rebase-Id: R57977499bb6b372ac4faa360e442e8733265e9f3
|
|
The current kernel methodology expects that tegra_cpu_suspend
is actually the last function in the entire suspend sequence.
In order to achieve this, the code needs to be remodelled a
bit so that we actually execute native cpu_suspend at the end
of the suspend sequence. This allows us to leverage all the
cpu_suspend code developed by ARM in the upstream kernels.
Bug 934368
Change-Id: I94172d7adaa54c10043c479a57b270925d85a16b
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Reviewed-on: http://git-master/r/84481
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
Rebase-Id: R15682d1d82f341338a2dd20c3083b66b9325bf7d
|
|
Bug 934368
Change-Id: Ic9d75cbb0c324b1858b2e476e33dd4f96349bce3
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Reviewed-on: http://git-master/r/86351
Reviewed-by: Simone Willett <swillett@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
Rebase-Id: Rb4fb04a26bc05a9649d17a3be8956d18998acc25
|
|
Can't use NR_CPUS on non-SMP systems. Just use the maximum.
Change-Id: I00b455adf950869146dfcd176efe4abdbe7aa24e
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Reviewed-on: http://git-master/r/87416
Reviewed-by: Aleksandr Frid <afrid@nvidia.com>
Reviewed-by: Varun Wadekar <vwadekar@nvidia.com>
Rebase-Id: Rd38f56587bd586144b67680d3e6c595d5f6b3def
|
|
The cpu suspend-resume code now duplicates the non-tegra
part from the native ARM code.
Bug 934368
Change-Id: I100c8de8e107d1baebb6ec30a1f6f77bca8f44aa
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Reviewed-on: http://git-master/r/83098
Rebase-Id: R84c5cc310386966c4f31e1149c9065602d1bc1ef
|
|
Make cpu_suspend()..return function preserve r4 to r11 across a suspend
cycle. This is in preparation of relieving platform support code from
this task.
Original commit: 5fa94c812c0001ac7c3d8868e956ec514734a352
Bug 911002
Change-Id: If33c32ba7de449288eac8f83cb0898ba77a46333
Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Mayuresh Kulkarni <mkulkarni@nvidia.com>
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Rebase-Id: R79c9865c168b8fde6a02b1ddce1bd98400e19161
|
|
Move the return address for cpu_resume to the top of stack so that
cpu_resume looks more like a normal function.
Original commit: 2fefbcd58590cf33189c6178098e12b31b994b5f
Bug 911002
Change-Id: I275930306a3b4ecb551a32da5f9f26dba53459ec
Acked-by: Frank Hofmann <frank.hofmann@tomtom.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Mayuresh Kulkarni <mkulkarni@nvidia.com>
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Rebase-Id: R54ebcedde6a84a538f44bcec759af88fef0abe4c
|
|
Bug 901430
Bug 905813
Change-Id: Id57f870262eebe6a2017b808d1a66624f903989d
Reviewed-on: http://git-master/r/64103
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Rebase-Id: Rc3cad5fafa9e62fa10099bc4dc1281954a04b8f5
|
|
The PL310 virtual address was calculated using the PPSB virtual/physical
address. It should be done using the CPU virtual/physical address. This
caused the TEGRA_PL310_VIRT value to overlap with the vmalloc region
of the kernel virtual memory map on whistler.
Bug 881831
Bug 867094
Change-Id: Ifaeeb9291553af59453f0041ad7cb1fe9d27979b
Signed-off-by: Puneet Saxena <puneets@nvidia.com>
Signed-off-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Reviewed-on: http://git-master/r/62097
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: Mayuresh Kulkarni <mkulkarni@nvidia.com>
Rebase-Id: Ra5a6165c8a02f0ac130bbaac4a477b901ceea62f
|
|
Can't use NR_CPUS on non-SMP systems. Just use the maximum.
Change-Id: Ie0d6289c3b8bdaada6335e4670c9f6b5ab2bcc93
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Reviewed-on: http://git-master/r/49344
Reviewed-by: Jin Qian <jqian@nvidia.com>
Reviewed-by: Daniel Willemsen <dwillemsen@nvidia.com>
Rebase-Id: R58abf556bf542b8cf0ee6dd0f091806235f49623
|
|
Change-Id: I2037be4b1309ac1fe9af0ec3e644e0a1a4924857
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Reviewed-on: http://git-master/r/48796
Reviewed-by: Daniel Willemsen <dwillemsen@nvidia.com>
Rebase-Id: R0840ee98b17984f73f9a5396ab6f86d4d92b744e
|
|
use buffered memory to bypass L2
add memory barrier after cpu suspend
Bug 862494
Change-Id: I0592ebd6608d2581700b9ae965de3e7d8aa2cabe
Reviewed-on: http://git-master/r/47172
Tested-by: Jin Qian <jqian@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Tested-by: Scott Williams <scwilliams@nvidia.com>
Rebase-Id: Rfee82dddd83449e730ccfcd5f6359bbaa00582a7
|
|
Change-Id: I7b769bec8fc2dc0cd6db34e125f1cfd45aea8b12
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Rebase-Id: Rcf33e9438333a90b3aa9bf29925a277d65317f84
|
|
The standard cpu_suspend does not work if there is an external
L2 cache in the system and individual CPUs are suspended without
shutting down the whole CPU complex. As a workaround for this
problem, we must save the CPU context to a non-cacheable region
of memory.
Change-Id: I2fffbc77ed4f17fe9710307aaacda80836bacee8
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Rebase-Id: R7328c032c2a13775aa09432e119ea845ded85930
|
|
Tag the stack frame created by the CPU register context push
macro with a magic number and validate that magic number in
the register context pop macro to ensure that the stack
remains balanced and uncorrupted.
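The push/pop macros described above can be modelled in C: the push stores a magic word alongside the registers, and the pop checks it so a corrupted or unbalanced stack is caught instead of silently resuming with garbage. The magic value and function names here are illustrative.

```c
#include <stdint.h>

#define CTX_MAGIC 0xDEADBEEFu  /* illustrative tag value */

/* Push n register values and tag the frame; returns the new SP. */
static uint32_t *ctx_push(uint32_t *sp, const uint32_t *regs, int n)
{
    for (int i = 0; i < n; i++)
        *--sp = regs[i];
    *--sp = CTX_MAGIC;
    return sp;
}

/* Pop n values back out; returns the popped SP, or 0 (NULL) if the
 * tag check fails, i.e. the stack is unbalanced or corrupted. */
static uint32_t *ctx_pop(uint32_t *sp, uint32_t *regs, int n)
{
    if (*sp++ != CTX_MAGIC)
        return 0;
    for (int i = n - 1; i >= 0; i--)
        regs[i] = *sp++;
    return sp;
}
```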
Change-Id: I6aa876496e30e6e70c0c60800c1b35d217595153
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Rebase-Id: R78eba17c256f03bdd6457ca3ebb1ecdba5632e60
|
|
Define macros to ensure that the behavior of push/pop of the
context register set is consistent across all callers.
Change-Id: If2e68764e9755979a205a57543b30438e9b7ff96
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Rebase-Id: Rb8f4984258e71c318e93fc709b18d1efdf5b2cc4
|
|
Modify the register usage of tegra_cpu_save so that the same set
of registers is saved to and restored from the stack.
Change-Id: I9a0e3ce80e0e1d4b47cbb984fb732fd612bf2c16
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Rebase-Id: R89e119278eb1d8f10f3c4e1c3c3203628de37a59
|
|
Change-Id: Ie2f619df4e5bff06960dcaa910a39d4cff78b879
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Rebase-Id: Ra75a8dba9e8f0fa57081a3fed9b3ef743b3c8796
|
|
Every call to tegra_cpu_save is always followed by a call to
tegra_cpu_exit_coherency. Simplify the callers of tegra_cpu_save
by folding the CPU context save functionality of cpu_suspend and
the coherency exit functionality into a single function called
tegra_cpu_suspend.
Change-Id: Ia71a663b2971685712d5b8a2b7e8b44fe1526f40
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Rebase-Id: R36c0c5f44608d0c099d928e19e36af2e7ba061d8
|
|
Define the SMP coherency exit code as a macro to allow it to be
inlined in assembly code that needs to control its register usage.
Change-Id: If5bd01241a92eb471cf59b4fc8445934fd4932b1
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Rebase-Id: R921ed4d46431115d164f73bacac16a68a9d32b0a
|
|
Clean up some rather fragile manipulation of the stack pointer in
the CPU suspend code. It's all unnecessary except in one case where
Tegra2 can abort a suspend because of activity on the other CPU.
Change-Id: Ic872364c5abd58f704b2afeeae4d8722f127d3bb
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
Rebase-Id: R5873dd120df2e98cc5bfcc74f86ebea6cc10f9b2
|
|
Separate the CPU context save and CPU coherency exit into separate
functions.
Change-Id: I7c5376677e293342b02b5bebdef6be2610522936
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
Rebase-Id: R17eb40d551e797448410cf6220dfba122faa702d
|
|
Add support for forced Tegra3 LP2 low power mode on the boot processor
(CPU 0) via the cluster control interface when all others are offline.
Switching to the LP CPU mode is also enabled with this change.
LP2 in idle and LP2 mode on the secondary processors are not yet
supported.
Change-Id: Icb898729f093be5e006c413f701532dd45228687
Signed-off-by: Scott Williams <scwilliams@nvidia.com>
Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
Rebase-Id: Rd5d8c2b0addfd6853033670b992ae082e4a0d9c8
|