|
Register syscore ops for modules whose context has to be saved and
restored during entry to and exit from the LP0 state via CPU idle.
Bug 1254633
Change-Id: Idf4a67535754db3ccc2fc528469fb17ec198cee0
Signed-off-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Reviewed-on: http://git-master/r/299447
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
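For reference, a minimal sketch of such a registration; the foo_* names
are illustrative, not taken from the actual patch:

  static int foo_syscore_suspend(void)
  {
          /* save context that LP0 powers down */
          return 0;
  }

  static void foo_syscore_resume(void)
  {
          /* restore the saved context */
  }

  static struct syscore_ops foo_syscore_ops = {
          .suspend = foo_syscore_suspend,
          .resume  = foo_syscore_resume,
  };

  /* typically called from the module's init path */
  register_syscore_ops(&foo_syscore_ops);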
|
|
Create "/proc/sys/lazy_vfree_pages" file to control lazy vfree pages
Bug 1238957
Change-Id: I75a296ae035d8cedb817319d8f4a5579ae6cf1ba
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Reviewed-on: http://git-master/r/289616
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
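A sketch of how such a root-level sysctl is typically wired up; only
the proc path comes from the patch, the variable name and default are
assumptions:

  static int lazy_vfree_pages = 32;       /* assumed default */

  static struct ctl_table lazy_vfree_table[] = {
          {
                  .procname     = "lazy_vfree_pages",
                  .data         = &lazy_vfree_pages,
                  .maxlen       = sizeof(int),
                  .mode         = 0644,
                  .proc_handler = proc_dointvec,
          },
          { }
  };

  /* registers the entry directly under /proc/sys */
  register_sysctl_table(lazy_vfree_table);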
|
|
This reverts commit 11388c87d2abca1f01975ced28ce9eacea239104.
The issue is that no wakelock is held in user space, i.e. by the Power
Manager service. This is because PowerManagerService fails to acquire
the wakelock: in 3.8 the wakelock module in the kernel expects the user
process to have the CAP_BLOCK_SUSPEND capability, which
PowerManagerService does not have.
Bug 1274297
Bug 1384311
Change-Id: I3b696108d47278cf40abce8d5a9bd012f98f2925
Signed-off-by: Ajay Nandakumar <anandakumarm@nvidia.com>
(cherry picked from commit e8464e785027a15279a13e6e32cd1aecd22d5a00)
Reviewed-on: http://git-master/r/282698
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
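The gatekeeping referred to above is the standard capability test; a
minimal sketch of the 3.8-era wakelock entry check:

  static int wakelock_permission_check(void)
  {
          if (!capable(CAP_BLOCK_SUSPEND))
                  return -EPERM;  /* PowerManagerService hits this */
          return 0;
  }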
|
|
Some device drivers require a callback to be called after the userspace
processes are frozen. This patch adds the PM_USERSPACE_FROZEN workqueue,
which is called after userspace processes are frozen but while kernel
threads are still running.
Bug 1344551
Change-Id: I0e6fd7e2473db168d01c88bc0192326ceea92ebe
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/266774
(cherry picked from commit d964493291ef87eea1a2ee47b5b66305bb18bcf3)
Reviewed-on: http://git-master/r/274939
Tested-by: Sang-Hun Lee <sanlee@nvidia.com>
Reviewed-by: Kevin Huang (Eng-SW) <kevinh@nvidia.com>
Reviewed-by: Mitch Luban <mluban@nvidia.com>
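A sketch of how a driver might consume such a callback, assuming the
event is delivered through the standard PM notifier chain (the actual
downstream plumbing may differ); foo_quiesce() is hypothetical:

  static int foo_pm_notify(struct notifier_block *nb,
                           unsigned long event, void *data)
  {
          if (event == PM_USERSPACE_FROZEN)
                  foo_quiesce();  /* userspace frozen, kthreads running */
          return NOTIFY_OK;
  }

  static struct notifier_block foo_pm_nb = {
          .notifier_call = foo_pm_notify,
  };

  /* from the driver's init path */
  register_pm_notifier(&foo_pm_nb);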
|
|
Added GPU frequency min/max as PM QoS classes.
Bug 1330780
Change-Id: I2428c62748521c17e23b2df9ca409deda8b36160
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/267702
Reviewed-by: Ilan Aelion <iaelion@nvidia.com>
Reviewed-by: Mitch Luban <mluban@nvidia.com>
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
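Usage would follow the standard PM QoS request API; the class constant
and value below are illustrative of the downstream addition:

  static struct pm_qos_request gpu_min_req;

  /* pin the GPU floor while latency-sensitive work runs */
  pm_qos_add_request(&gpu_min_req, PM_QOS_GPU_FREQ_MIN, 300000);
  /* ... latency-sensitive section ... */
  pm_qos_remove_request(&gpu_min_req);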
|
|
This is the 3.10.12 stable release
Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
|
|
Correct the issue with /proc/timer_list reported by Holger. When reading
from the proc file with a sufficiently small buffer, 2k so not really
that small, one could get hung trying to read the file a chunk at a
time.
The timer_list_start function failed to account for the possibility that
the offset was adjusted outside of timer_list_next.
Signed-off-by: Nathan Zimmer <nzimmer@sgi.com>
Reported-by: Holger Hans Peter Freyther <holger@freyther.de>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Berke Durak <berke.durak@xiphos.com>
Cc: Jeff Layton <jlayton@redhat.com>
Tested-by: Al Viro <viro@zeniv.linux.org.uk>
Cc: <stable@vger.kernel.org> # 3.10.x
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit 231efe79024840d9665f61f19cce032aaa8d8cea)
Change-Id: Ie543eb9649a0cb0f12a3f74b291e065c9d23cf18
Reviewed-on: http://git-master/r/270878
Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
Tested-by: Shridhar Rasal <srasal@nvidia.com>
Reviewed-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Tested-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Reviewed-by: Dan Willemsen <dwillemsen@nvidia.com>
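The underlying contract, sketched with a hypothetical iterator
(first_item()/next_item() are stand-ins): a seq_file ->start() may be
handed any *pos, not just the one its ->next() last produced, so it
must locate the element from *pos alone:

  static void *example_start(struct seq_file *m, loff_t *pos)
  {
          loff_t n = *pos;
          struct item *it;

          /* walk from the head; do not trust state left by ->next() */
          for (it = first_item(); it && n > 0; it = next_item(it))
                  n--;
          return it;      /* NULL terminates the sequence */
  }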
|
|
Calling freeze_processes sets a global flag that will cause any
process that calls try_to_freeze to enter the refrigerator. It
skips sending a signal to the current task, but if the current
task ever hits try_to_freeze, all threads will be frozen and the
system will deadlock.
Set a new flag, PF_SUSPEND_TASK, on the task that calls
freeze_processes. The flag notifies the freezer that the thread
is involved in suspend and should not be frozen. Also add a
WARN_ON in thaw_processes if the caller does not have the
PF_SUSPEND_TASK flag set to catch if a different task calls
thaw_processes than the one that called freeze_processes, leaving
a task with PF_SUSPEND_TASK permanently set on it.
Threads that spawn off a task with PF_SUSPEND_TASK set (which
swsusp does) will also have PF_SUSPEND_TASK set, preventing them
from freezing while they are helping with suspend, but they need
to be dead by the time suspend is triggered, otherwise they may
run when userspace is expected to be frozen. Add a WARN_ON in
thaw_processes if more than one thread has the PF_SUSPEND_TASK
flag set.
Reported-and-tested-by: Michael Leun <lkml20130126@newton.leun.net>
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit 2b44c4db2e2f1765d35163a861d301038e0c8a75)
Change-Id: I12e00d2ba61db827c07b8f4ef48e88a722f02d71
Reviewed-on: http://git-master/r/259126
Reviewed-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Tested-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
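The core of the change, per the description above (sketch):

  int freeze_processes(void)
  {
          /* mark the suspending task so the freezer skips it */
          current->flags |= PF_SUSPEND_TASK;
          /* ... existing freeze logic ... */
          return 0;
  }

  void thaw_processes(void)
  {
          struct task_struct *curr = current;

          /* ... existing thaw logic ... */

          /* catch a thaw from a task other than the one that froze */
          WARN_ON(!(curr->flags & PF_SUSPEND_TASK));
          curr->flags &= ~PF_SUSPEND_TASK;
  }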
|
|
Add a bool prompt and help description for the CONFIG_WAKELOCK
and CONFIG_HAS_WAKELOCK options in the Kconfig file to ease
menuconfig operations.
Bug 1314808
Change-Id: I5c450ef0994a08c1bf51e8c9849bb96c69c69081
Signed-off-by: Naveen Kumar S <nkumars@nvidia.com>
Reviewed-on: http://git-master/r/243459
(cherry picked from commit 2dda2db4db3fca12301cab3b9c59fba758573652)
Reviewed-on: http://git-master/r/244781
(cherry picked from commit f3c8128a972e0e9f4df0a6b3d364cb14b5f29a9c)
Reviewed-on: http://git-master/r/247680
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Kiran Adduri <kadduri@nvidia.com>
Reviewed-by: Laxman Dewangan <ldewangan@nvidia.com>
|
|
When the system enters suspend, it disables all irqs in a single
function call. This disables the EARLY_RESUME irqs along with the
normal irqs.
The EARLY_RESUME irqs get enabled in sys_core_ops->resume and the
non-EARLY_RESUME irqs get enabled in the normal system resume path.
When suspend_noirq fails or suspend is aborted for any reason, the
EARLY_RESUME irqs do not get enabled, because the
sys_core_ops->resume() call never happens; only the non-EARLY_RESUME
irqs are enabled in the normal system resume path. This leaves the
EARLY_RESUME irqs disabled for the remaining life of the system.
Add a check in the normal irq_resume() path: if the EARLY_RESUME irqs
have not been enabled yet, enable them forcefully.
bug 1282448
Change-Id: I7ffffd725675ca635310eb4913a1f885d2e42e37
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/235000
(Cherrypicked commit 91262b293e7c061f6c80488fa235811362e128e6)
Reviewed-on: http://git-master/r/236600
Reviewed-by: Automatic_Commit_Validation_User
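A sketch of the forced-enable pass; is_early_resume_irq() is a
hypothetical stand-in for however the patch marks such interrupts:

  static void force_enable_early_resume_irqs(void)
  {
          struct irq_desc *desc;
          int irq;

          for_each_irq_desc(irq, desc) {
                  /* still suspended means syscore resume never ran */
                  if (is_early_resume_irq(desc) &&
                      (desc->istate & IRQS_SUSPENDED))
                          enable_irq(irq);
          }
  }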
|
|
Make the default RT_RUNTIME_SHARE setting reflect the most common
throttle role, that of a safety mechanism to protect the box.
Bug 1269903
Change-Id: Id4ccf0095ea254f2e15fddc7ab02069f7f60a7c0
Signed-off-by: Mike Galbraith <bitbucket@online.de>
Reviewed-on: http://git-master/r/234274
Reviewed-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
Reviewed-by: Paul Walmsley <pwalmsley@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Tested-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com>
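Assuming this is the usual scheduler-feature default flip, the change
amounts to one line in kernel/sched/features.h:

  /* true: borrow rt_runtime from other CPUs when throttled;
   * false: throttle strictly per CPU, protecting the box */
  SCHED_FEAT(RT_RUNTIME_SHARE, false)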
|
|
Change-Id: If8410d4059f778b475fd28663ade50c5132cc540
Signed-off-by: Sami Liedes <sliedes@nvidia.com>
Reviewed-on: http://git-master/r/225723
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
|
|
migration_call() will do all the things that update_runtime() does, so
update_runtime() is a redundant notifier; remove it.
Furthermore, there is a potential risk that the current code will hit
the BUG_ON at line 687 of rt.c when doing CPU hotplug while realtime
threads are running, because the runtime gets enabled twice.
Change-Id: I0fdad8d5a1cebb845d3f308b205dbd6517c3e4de
Cc: bitbucket@online.de
Signed-off-by: Neil Zhang <zhangwm@marvell.com>
Reviewed-on: http://git-master/r/215596
(cherry picked from commit 8f646de983f24361814d9a6ca679845fb2265807)
Reviewed-on: http://git-master/r/223068
Reviewed-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
Tested-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
Reviewed-by: Paul Walmsley <pwalmsley@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com>
GVS: Gerrit_Virtual_Submit
|
|
Print the cause of the error if suspend fails. This helps in
debugging the failure case.
(Cherrypicked commit
2a5cd5441333ffd1b8e72c2b0d70734b9ca5fdeb)
Reviewed-on: http://git-master/r/202454
Change-Id: I5fa1ea4a542d8ee8f8bdf106a97eefc2c5e3d8d3
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/215100
|
|
Since cpustat[CPUTIME_IDLE] is never connected to ts->idle_sleeptime,
never read from cpustat[CPUTIME_IDLE] when reporting stats in
/proc/stat.
Note this was rejected by Michal Hocko when it was initially proposed
by Martin Schwidefsky on LKML, so to upstream it you had better find an
alternative (either completely disable cpustat[CPUTIME_IDLE] for
CONFIG_NO_HZ, or somehow connect the two to keep them in sync).
bug 1190321
Change-Id: Idc92488910b826aff850a010016d8326c7ab9e6c
Signed-off-by: Bo Yan <byan@nvidia.com>
Reviewed-on: http://git-master/r/212224
(cherry picked from commit e7a9220f5883bf3816e24895a34239a34a7d9ece)
Reviewed-on: http://git-master/r/212907
GVS: Gerrit_Virtual_Submit
Reviewed-by: Liang Cheng (SW) <licheng@nvidia.com>
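The reader side then looks roughly like this (a sketch in the style of
fs/proc/stat.c):

  static u64 get_idle_time(int cpu)
  {
          u64 us = get_cpu_idle_time_us(cpu, NULL);

          if (us != -1ULL)        /* NO_HZ bookkeeping is active */
                  return usecs_to_cputime64(us);

          /* !NO_HZ: cpustat is maintained by the tick and stays valid */
          return kcpustat_cpu(cpu).cpustat[CPUTIME_IDLE];
  }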
|
|
Reinitialize rq->next_balance when a CPU is hot-added. Otherwise,
scheduler domain rebalancing may be skipped if rq->next_balance was
set to a future time when the CPU was last active, and the
newly-re-added CPU is in idle_balance(). As a result, the
newly-re-added CPU will remain idle with no tasks scheduled until the
softlockup watchdog runs - potentially 4 seconds later. This can
waste energy and reduce performance.
This behavior can be observed in some SoC kernels, which use CPU
hotplug to dynamically remove and add CPUs in response to load. In
one case that triggered this behavior,
0. the system started with all cores enabled, running multi-threaded
CPU-bound code;
1. the system entered some single-threaded code;
2. a CPU went idle and was hot-removed;
3. the system started executing a multi-threaded CPU-bound task;
4. the CPU from event 2 was re-added, to respond to the load.
The time interval between events 2 and 4 was approximately 300
milliseconds.
Of course, ideally CPU hotplug would not be used in this manner,
but this patch does appear to fix a real bug.
Nvidia folks: this patch is submitted as at least a partial fix for
bug 1243368 ("[sched] Load-balancing not happening correctly after
cores brought online")
Change-Id: Iabac21e110402bb581b7db40c42babc951d378d0
Signed-off-by: Paul Walmsley <pwalmsley@nvidia.com>
Cc: Peter Boonstoppel <pboonstoppel@nvidia.com>
Reviewed-on: http://git-master/r/206918
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Amit Kamath <akamath@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com>
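The fix itself is small; a sketch of the idea, hung off a hotplug
notifier (the actual patch hooks the scheduler's existing hotplug
path):

  static int sched_cpu_notify(struct notifier_block *nb,
                              unsigned long action, void *hcpu)
  {
          int cpu = (long)hcpu;

          if (action == CPU_ONLINE)
                  /* next_balance may hold a time from the CPU's
                   * previous life; reset it so rebalancing resumes */
                  cpu_rq(cpu)->next_balance = jiffies;

          return NOTIFY_OK;
  }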
|
|
Removed the check for whether the device has wakeup capability during
registration, as the wakeup policy has to come from user space per
Documentation/power/devices.txt.
Bug 1219152
Change-Id: I4994c603aac0afd54381dcaec239f2315831849f
Signed-off-by: Chaitanya Bandi <bandik@nvidia.com>
Reviewed-on: http://git-master/r/195109
Reviewed-by: Laxman Dewangan <ldewangan@nvidia.com>
|
|
pm_qos_update_request_timeout() was introduced without being exported.
Export it like all of the other PM QoS APIs so that drivers compiled as
modules can use it.
Change-Id: Ie51ce52db4ca633117fe18441c42b562220399e8
Signed-off-by: Li Li <lli5@nvidia.com>
Reviewed-on: http://git-master/r/189306
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Eric Miao <emiao@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
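The change is the one-line export; with it in place a module can arm a
temporary constraint (the class constant is a stock one, the values are
illustrative):

  EXPORT_SYMBOL_GPL(pm_qos_update_request_timeout);

  /* module-side usage */
  static struct pm_qos_request req;

  pm_qos_add_request(&req, PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
  pm_qos_update_request_timeout(&req, 100, 50000 /* usec */);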
|
|
Commit "gcov-kernel: patch for Android toolchain 4.4.x support" broke
support for gcov on vanilla gcc. Introduce #ifdefs to make it work on
both of them.
Since the gcov ABI for Android gcc is different, the build system
must set CONFIG_GCOV_TOOLCHAIN_IS_ANDROID when compiling with an
Android toolchain.
Also remove a few magic numbers from the original gcov code and fix an
unused function warning.
Bug 1155439
Change-Id: I7c18938e5503df4ee1c3f8de2b6f5a99ceef7f71
Signed-off-by: Tuomas Tynkkynen <ttynkkynen@nvidia.com>
Reviewed-on: http://git-master/r/162711
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
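The shape of the fix, sketched; the real structs differ between the two
ABIs in more ways than shown, and extra_field is a placeholder rather
than the actual layout:

  struct gcov_fn_info {
          unsigned int ident;
          unsigned int checksum;
  #ifdef CONFIG_GCOV_TOOLCHAIN_IS_ANDROID
          unsigned int extra_field;       /* placeholder for ABI delta */
  #endif
          unsigned int n_ctrs[0];
  };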
|
|
Doing an exponential moving average per nr_running++/-- does not
guarantee a fixed sample rate, which induces errors if there are lots
of threads being enqueued/dequeued from the rq (Linpack mt). Instead of
keeping track of the average, the scheduler now keeps track of the
integral of nr_running and allows the readers to perform filtering on
top.
Implemented a proper exponential moving average for the runnables
governor and a straight 100ms average for the balanced governor. Tweaked
the thresholds for the runnables governor to minimize latency. Also,
decreased sample_rate for the runnables governor to the absolute minimum
of 10msecs.
Updated to K3.4
Change-Id: Ia25bf8baf2a1a015ba188b2c06e551e89b16c5f8
Signed-off-by: Sai Charan Gurrappadi <sgurrappadi@nvidia.com>
Signed-off-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Reviewed-on: http://git-master/r/131147
Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
Rebase-Id: R7a20292e2cfb551a875962f0903647f69b78a0ab
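A sketch of the writer side of the integral scheme; field names are
illustrative:

  /* called on every nr_running change, before the count is updated */
  static inline void nr_running_integral_update(struct rq *rq, u64 now)
  {
          rq->nr_running_integral +=
                  (u64)rq->nr_running * (now - rq->nr_last_stamp);
          rq->nr_last_stamp = now;
  }

A reader takes (integral_now - integral_then) / window to get the exact
average over any window, and can layer an EMA (runnables governor) or a
flat 100ms average (balanced governor) on top.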
|
|
[perf] The runnable threads governor only looks at the average number of
runnables in the system to make a decision when bringing cores
offline/online. First pass: tweak thresholds and delays to reduce
decision latency to about ~50-70ms per core (from ~100-150ms per core).
Change-Id: Idd3b268a74a8f56ad3fc0e5c7f388174d1b6611f
Signed-off-by: Sai Charan Gurrappadi <sgurrappadi@nvidia.com>
Reviewed-on: http://git-master/r/124679
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com>
Rebase-Id: R176b97c14a54c057c2e7f4e57237e839ac714d88
|
|
After a kthread is created it signals the requester using complete()
and enters TASK_UNINTERRUPTIBLE. However, since complete() wakes up
the requesting thread this can cause a preemption. The preemption will
not remove the task from the runqueue (for that schedule() has to be
invoked directly).
This is a problem if directly after kthread creation you try to do a
kthread_bind(), which will block in HZ steps until the thread is off
the runqueue.
This patch disables preemption during complete(), since we call
schedule() directly afterwards, so it will correctly enter
TASK_UNINTERRUPTIBLE. This speeds up kthread creation/binding during
cpu hotplug significantly.
Change-Id: I856ddd4e01ebdb198ba90f343b4a0c5933fd2b23
Signed-off-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
Rebase-Id: Rae8fc889bf6abbe91080417be672311948b6a8dd
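The patched sequence in kthread() then looks like this (sketch):

  __set_current_state(TASK_UNINTERRUPTIBLE);

  preempt_disable();              /* complete() wakes the requester... */
  complete(&create->done);        /* ...but cannot preempt us here */
  schedule_preempt_disabled();    /* so we really leave the runqueue */
  preempt_enable();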
|
|
Port commit 1802afb2a (http://git-master/r/111637) from v3.1
Re-compute time-average nr_running when it is read. This would
prevent reading stalled average value if there were no run-queue
changes for a long time. New average value is returned to the reader,
but not stored, to avoid concurrent writes. Lightweight sequence
counter synchronization is used to assure data consistency when
re-computing the average.
Original author: Alex Frid <afrid@nvidia.com>
Signed-off-by: Alex Frid <afrid@nvidia.com>
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Change-Id: Ic486006d62436fb61cda4ab6897e933f5c102b52
Rebase-Id: Re0ca57c84a644e8e2d474930379cdcc386a2135a
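The read side sketched; compute_fresh_average() is a stand-in for the
recompute described above, and the seqcount field name is illustrative:

  u64 avg_nr_running(struct rq *rq)
  {
          unsigned int seq;
          u64 avg;

          do {
                  seq = read_seqcount_begin(&rq->ave_seqcnt);
                  /* recomputed from current state, not written back */
                  avg = compute_fresh_average(rq);
          } while (read_seqcount_retry(&rq->ave_seqcnt, seq));

          return avg;
  }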
|
|
Port commit 0b5a8a6f3 (http://git-master/r/111635) from v3.1
Compute the time-average number of running tasks per run-queue for a
trailing window of a fixed time period. The delta add/sub to the
average value is weighted by the amount of time per nr_running value
relative to the total measurement period.
Original author: Diwakar Tundlam <dtundlam@nvidia.com>
Change-Id: I076e24ff4ed65bed3b8dd8d2b279a503318071ff
Signed-off-by: Diwakar Tundlam <dtundlam@nvidia.com>
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Rebase-Id: R1760349117674c9cf5ea63046f937a7c7a0186f6
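In fixed point, the weighted update described above comes out roughly
as follows (period, scale, and field names are assumed; a real
implementation would also clamp dt to the period):

  #define NR_AVE_PERIOD   (1 << 27)       /* ~134 ms in ns, assumed */
  #define NR_AVE_SCALE(x) ((x) << 10)     /* fixed-point scale, assumed */

  static void do_avg_nr_running(struct rq *rq, u64 now)
  {
          s64 dt = now - rq->nr_last_stamp;
          s64 nr = NR_AVE_SCALE(rq->nr_running);

          /* weight the delta by dt relative to the averaging period */
          rq->avg_nr_running +=
                  dt * (nr - rq->avg_nr_running) / NR_AVE_PERIOD;
          rq->nr_last_stamp = now;
  }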
|
|
Gcov's internal data structures, on which the kernel depends, have
changed in GCC 4.6. This patch adds support for GCC 4.6 and should
still work on GCC 4.4 too.
For reference, look at 'struct gcov_fn_info' in GCC's 'gcc/gcov-io.h',
near line 698:
https://android.googlesource.com/toolchain/gcc/+/master/gcc-4.4.3/
https://android.googlesource.com/toolchain/gcc/+/master/gcc-4.6/
Bug 1003822
Change-Id: I527736f944c80b8b345d1685669c0b99eb38fb66
Signed-off-by: Tuomas Tynkkynen <ttynkkynen@nvidia.com>
Reviewed-on: http://git-master/r/110073
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
Tested-by: Juha Tukkinen <jtukkinen@nvidia.com>
Rebase-Id: Rfdb0c2f3801fc41d3ed4b3696634adf79bdc232b
|
|
- Remove redefinition of stub function
Change-Id: Id31c25707347cfa2947a83317ba5bf5bacfaa442
Reviewed-on: http://git-master/r/115069
Reviewed-by: Krishna Monian <kmonian@nvidia.com>
Tested-by: Krishna Monian <kmonian@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Bo Yan <byan@nvidia.com>
Rebase-Id: R50526c83eaec6b30319fa4adb7033d81e749399d
|
|
Rebase-Id: R940fad74c7e91ef3d1d3d589a48064ccb7335541
|
|
after-upstream-android
Conflicts:
arch/arm/common/Kconfig
arch/arm/mm/Makefile
arch/arm/mm/cache-l2x0.c
arch/arm/mm/mmu.c
drivers/input/Kconfig
drivers/input/Makefile
drivers/power/Kconfig
kernel/futex.c
|
|
Based on work by George G. Davis <gdavis@mvista.com>.
See http://lwn.net/Articles/390419/
Change-Id: I8df700d20a154e179f8cf6cdfe4015efc5d384f2
Signed-off-by: Juha Tukkinen <jtukkinen@nvidia.com>
Reviewed-on: http://git-master/r/62998
Reviewed-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
Rebase-Id: R2607a46c8bd1e521abe44a57a5ccf7317333d6c9
|
|
Based on work done for LTP in
http://ltp.cvs.sourceforge.net/viewvc/ltp/utils/analysis/gcov-kernel
Patch originates from Motorola kernel team (mkw348@motorola.com).
Change-Id: Ibb2a7c8afd79051e8d6c7fde83f04745be14f5fd
Signed-off-by: Juha Tukkinen <jtukkinen@nvidia.com>
Reviewed-on: http://git-master/r/62997
Reviewed-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Reviewed-by: Scott Williams <scwilliams@nvidia.com>
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
Rebase-Id: R67557d023bc94fbe900bfc9deef2f5de9955ea43
|
|
For testing purposes it is useful to be able to disable
PM QoS.
Bug 1020898
Bug 917572
Reviewed-on: http://git-master/r/124667
Change-Id: I266f5b5730cfe4705197d8b09db7f9eda6766c7c
Signed-off-by: Antti P Miettinen <amiettinen@nvidia.com>
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Rebase-Id: Re2088674f90436e0b9dd74310d5cda1f9e2868e4
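One plausible shape for such a switch, sketched with an assumed
parameter name:

  static bool pm_qos_enable __read_mostly = true;
  module_param(pm_qos_enable, bool, 0644);

  /* then, early in the aggregation path: */
  if (!pm_qos_enable)
          return 0;       /* behave as if no constraints exist */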
|
|
Bug 940061
Change-Id: Ibae842fdc3af3c92ec7e6125c602417110d8b55e
Signed-off-by: Gaurav Sarode <gsarode@nvidia.com>
Reviewed-on: http://git-master/r/84521
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
Tested-by: Aleksandr Frid <afrid@nvidia.com>
Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com>
Rebase-Id: R830d4e99f1e03b61a8c4e52e11645b7ed2f10f56
|
|
Add minimum and maximum CPU frequency as PM QoS parameters.
Bug 888312
Change-Id: I18abddded35a044a6ad8365035e31d1a2213a329
Reviewed-on: http://git-master/r/72206
Signed-off-by: Antti P Miettinen <amiettinen@nvidia.com>
Signed-off-by: Varun Wadekar <vwadekar@nvidia.com>
Reviewed-on: http://git-master/r/75883
Reviewed-by: Automatic_Commit_Validation_User
Rebase-Id: R1007bbef60489ecc81a9acd0ce3b0abfa9a05f3e
|
|
Bug 894200
Change-Id: Ieb009a13c6ef9bca2388e234eb973d65a4e3a58b
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/71034
Reviewed-by: Rohan Somvanshi <rsomvanshi@nvidia.com>
Tested-by: Rohan Somvanshi <rsomvanshi@nvidia.com>
Rebase-Id: R5791d3cb0bb66f3b8079f5a8af5fa758fb3c6705
|
|
Simple trace points for measuring hotplug up/down times.
Bug 960310
Change-Id: I1927aae6edb74cba7ca3e9522d138407b48325dc
Signed-off-by: Antti P Miettinen <amiettinen@nvidia.com>
Reviewed-on: http://git-master/r/92920
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Satya Popuri <spopuri@nvidia.com>
Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com>
Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
Rebase-Id: R9a5ff4f33d9d5f06ea7b4660a6567680398eefb1
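A minimal trace event of this kind; the event name and fields are
illustrative. Up/down times fall out of the tracer's own timestamps on
paired events:

  TRACE_EVENT(cpu_hotplug,

          TP_PROTO(unsigned int cpu, bool online),

          TP_ARGS(cpu, online),

          TP_STRUCT__entry(
                  __field(unsigned int, cpu)
                  __field(bool, online)
          ),

          TP_fast_assign(
                  __entry->cpu = cpu;
                  __entry->online = online;
          ),

          TP_printk("cpu=%u online=%d", __entry->cpu, __entry->online)
  );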
|
|
Change-Id: Ica22a3f92c8ca33a5779a74d3afad775736b1663
Signed-off-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Reviewed-on: http://git-master/r/78450
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: Varun Wadekar <vwadekar@nvidia.com>
Rebase-Id: R02a57e1de4d0a5bf9f0a7fcdebdbc24470b9c4eb
|
|
Add a new module that will dump the contents of the ftrace ring buffer.
Data is compressed and can be in ASCII or binary form. Data is dumped
to the console automatically on kernel panic, and can also be dumped by
reading /proc/tracedump. See tracedump.h for details.
Change-Id: I7b7afc3def0b88629dd120d17e43858306a8f357
Signed-off-by: Liang Cheng <licheng@nvidia.com>
Reviewed-on: http://git-master/r/69494
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Dan Willemsen <dwillemsen@nvidia.com>
Rebase-Id: Rb7921bf859e968422af1a98b971415462d8c57f0
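Dump-on-panic is typically hooked up through the panic notifier list;
tracedump_all() below is a stand-in for the module's dump routine:

  static int tracedump_panic(struct notifier_block *nb,
                             unsigned long event, void *ptr)
  {
          tracedump_all();        /* compress and write the ring buffer */
          return NOTIFY_OK;
  }

  static struct notifier_block tracedump_panic_nb = {
          .notifier_call = tracedump_panic,
  };

  atomic_notifier_chain_register(&panic_notifier_list,
                                 &tracedump_panic_nb);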
|
|
This module lets subsystem authors prioritize ftrace events
by calling tracelevel_register(...). High-priority traces
will be automatically enabled on boot. See tracelevel.h
for more details.
Original-Change-Id: If03699e96c598bdcf93b9a9f73918ce7b0c750cb
Reviewed-on: http://git-master/r/40290
Reviewed-by: Alon Farchy <afarchy@nvidia.com>
Tested-by: Alon Farchy <afarchy@nvidia.com>
Reviewed-by: Daniel Willemsen <dwillemsen@nvidia.com>
Tested-by: Daniel Solomon <daniels@nvidia.com>
Tested-by: Simone Willett <swillett@nvidia.com>
Rebase-Id: R49f59f81d61907f66fdf892130a1b4dc6575d40e
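Intended usage, guessed from the description; the signature and the
priority constant are assumptions, see tracelevel.h for the real
interface:

  /* tag this subsystem's event as high priority so it is enabled
   * automatically at boot (hypothetical call) */
  ret = tracelevel_register("foo_subsys_event", TRACELEVEL_HIGH);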
|
|
commit b22ce2785d97423846206cceec4efee0c4afd980 upstream.
If !PREEMPT, a kworker running work items back to back can hog CPU.
This becomes dangerous when a self-requeueing work item which is
waiting for something to happen races against stop_machine. Such
self-requeueing work item would requeue itself indefinitely hogging
the kworker and CPU it's running on while stop_machine would wait for
that CPU to enter stop_machine while preventing anything else from
happening on all other CPUs. The two would deadlock.
Jamie Liu reports that this deadlock scenario exists around
scsi_requeue_run_queue() and libata port multiplier support, where one
port may exclude command processing from other ports. With the right
timing, scsi_requeue_run_queue() can end up requeueing itself trying
to execute an IO which is asked to be retried while another device has
an exclusive access, which in turn can't make forward progress due to
stop_machine.
Fix it by invoking cond_resched() after executing each work item.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Jamie Liu <jamieliu@google.com>
References: http://thread.gmane.org/gmane.linux.kernel/1552567
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
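The fix lands at the tail of the work-item loop in process_one_work()
(sketch):

  worker->current_func(work);     /* run the work item */

  /*
   * On !PREEMPT kernels, back-to-back work items could otherwise
   * hog this CPU and deadlock against stop_machine.
   */
  cond_resched();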
|
|
commit 84a78a6504f5c5394a8e558702e5b54131f01d14 upstream.
Correct an issue with /proc/timer_list reported by Holger.
When reading from the proc file with a sufficiently small buffer, 2k so
not really that small, one could get hung trying to read the file a
chunk at a time.
The timer_list_start function failed to account for the possibility that
the offset was adjusted outside of timer_list_next.
Signed-off-by: Nathan Zimmer <nzimmer@sgi.com>
Reported-by: Holger Hans Peter Freyther <holger@freyther.de>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Berke Durak <berke.durak@xiphos.com>
Cc: Jeff Layton <jlayton@redhat.com>
Tested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Avoid waking up every thread sleeping in a sigtimedwait call during
suspend and resume by calling a freezable blocking call. Previous
patches modified the freezer to avoid sending wakeups to threads
that are blocked in freezable blocking calls.
This call was selected to be converted to a freezable call because
it doesn't hold any locks or release any resources when interrupted
that might be needed by another freezing task or a kernel driver
during suspend, and is a common site where idle userspace tasks are
blocked.
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit a2d5f1f5d941593e61071dc78e9de228eda5475f)
Change-Id: I854b1ea03eb338198a7d0cbdaaf4abfc4f0e936d
Reviewed-on: http://git-master/r/228700
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Tested-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
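The conversion itself is a one-liner in do_sigtimedwait() (sketch):

  /* was: timeout = schedule_timeout_interruptible(timeout); */
  timeout = freezable_schedule_timeout_interruptible(timeout);

The nanosleep and futex_wait patches below follow the same pattern with
their respective freezable_* helpers.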
|
|
Avoid waking up every thread sleeping in a nanosleep call during
suspend and resume by calling a freezable blocking call. Previous
patches modified the freezer to avoid sending wakeups to threads
that are blocked in freezable blocking calls.
This call was selected to be converted to a freezable call because
it doesn't hold any locks or release any resources when interrupted
that might be needed by another freezing task or a kernel driver
during suspend, and is a common site where idle userspace tasks are
blocked.
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit b0f8c44f30e58c3aaaaaf864d5c3d3cc2e8a4c2d)
Change-Id: Ib8dbab690b5adf33a8fe4194ae961ed7d58d3f26
Reviewed-on: http://git-master/r/228699
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Tested-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
|
|
Avoid waking up every thread sleeping in a futex_wait call during
suspend and resume by calling a freezable blocking call. Previous
patches modified the freezer to avoid sending wakeups to threads
that are blocked in freezable blocking calls.
This call was selected to be converted to a freezable call because
it doesn't hold any locks or release any resources when interrupted
that might be needed by another freezing task or a kernel driver
during suspend, and is a common site where idle userspace tasks are
blocked.
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit 56467c7697f5aef6974501fbe2c3e63674583549)
Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
|
|
Android goes through suspend/resume very often (every few seconds when
on a busy wifi network with the screen off), and a significant portion
of the energy used to go in and out of suspend is spent in the
freezer. If a task has called freezer_do_not_count(), don't bother
waking it up. If it happens to wake up later it will call
freezer_count() and immediately enter the refrigerator.
Combined with patches to convert freezable helpers to use
freezer_do_not_count() and convert common sites where idle userspace
tasks are blocked to use the freezable helpers, this reduces the
time and energy required to suspend and resume.
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit 613f5d13b569859171f0896fbc73ee0bfa811fda)
Change-Id: I184a3a065f0c6b951dc129c722f6e42268da81f7
Reviewed-on: http://git-master/r/228691
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Tested-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
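The core of the change in freeze_task() (sketch):

  /*
   * Tasks that called freezer_do_not_count() will enter the
   * refrigerator on their own when they wake; skip the wakeup.
   */
  if (freezer_should_skip(p))
          return false;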
|
|
freezer: shorten freezer sleep time using exponential backoff
All tasks can easily be frozen in under 10 ms; switch to using
an initial 1 ms sleep followed by exponential backoff until
8 ms. Also convert the printed time to ms instead of centiseconds.
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit 18ad0c6297df1d671ecea83b608cd9e432642a05)
Change-Id: I470afcff3d1de66161e9545aa940a5910d41d122
Signed-off-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Reviewed-on: http://git-master/r/228690
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
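The freezing loop then backs off like this (sketch of the shape of
try_to_freeze_tasks()):

  unsigned int sleep_ms = 1;

  while (todo) {
          /* ... recount still-unfrozen tasks into todo ... */
          msleep(sleep_ms);
          if (sleep_ms < 8)
                  sleep_ms *= 2;  /* 1, 2, 4, 8 ms cap */
  }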
|
|
lockdep: remove task argument from debug_check_no_locks_held
The only existing caller to debug_check_no_locks_held calls it
with 'current' as the task, and the freezer needs to call
debug_check_no_locks_held but doesn't already have a current
task pointer, so remove the argument. It is already assuming
that the current task is relevant by dumping the current stack
trace as part of the warning.
This was originally part of 6aa9707099c (lockdep: check that
no locks held at freeze time) which was reverted in
dbf520a9d7d4.
Original-author: Mandeep Singh Baines <msb@chromium.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit 1b1d2fb4444231f25ddabc598aa2b5a9c0833fba)
Change-Id: I6a07ccb476817c8045cb3d76709d1569ec6f42c8
Reviewed-on: http://git-master/r/228689
GVS: Gerrit_Virtual_Submit
Reviewed-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Tested-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
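With the argument gone, the freezer can assert the invariant directly;
a sketch of the resulting try_to_freeze() path:

  void debug_check_no_locks_held(void);   /* checks current implicitly */

  static inline bool try_to_freeze(void)
  {
          if (!(current->flags & PF_NOFREEZE))
                  debug_check_no_locks_held();
          return try_to_freeze_unsafe();
  }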
|
|
commit 8c4f3c3fa9681dc549cd35419b259496082fef8b upstream.
There's been a nasty bug that would show up and not give much info.
The bug displayed the following warning:
WARNING: at kernel/trace/ftrace.c:1529 __ftrace_hash_rec_update+0x1e3/0x230()
Pid: 20903, comm: bash Tainted: G O 3.6.11+ #38405.trunk
Call Trace:
[<ffffffff8103e5ff>] warn_slowpath_common+0x7f/0xc0
[<ffffffff8103e65a>] warn_slowpath_null+0x1a/0x20
[<ffffffff810c2ee3>] __ftrace_hash_rec_update+0x1e3/0x230
[<ffffffff810c4f28>] ftrace_hash_move+0x28/0x1d0
[<ffffffff811401cc>] ? kfree+0x2c/0x110
[<ffffffff810c68ee>] ftrace_regex_release+0x8e/0x150
[<ffffffff81149f1e>] __fput+0xae/0x220
[<ffffffff8114a09e>] ____fput+0xe/0x10
[<ffffffff8105fa22>] task_work_run+0x72/0x90
[<ffffffff810028ec>] do_notify_resume+0x6c/0xc0
[<ffffffff8126596e>] ? trace_hardirqs_on_thunk+0x3a/0x3c
[<ffffffff815c0f88>] int_signal+0x12/0x17
---[ end trace 793179526ee09b2c ]---
It was finally narrowed down to unloading a module that was being traced.
It was actually more than that. When functions are being traced, there's
a table of all functions that have a ref count of the number of active
tracers attached to that function. When a function trace callback is
registered to a function, the function's record ref count is incremented.
When it is unregistered, the function's record ref count is decremented.
If an inconsistency is detected (ref count goes below zero) the above
warning is shown and the function tracing is permanently disabled until
reboot.
The ftrace callback ops holds a hash of functions that it filters on
(and/or filters off). If the hash is empty, the default means to filter
all functions (for the filter_hash) or to disable no functions (for the
notrace_hash).
When a module is unloaded, it frees the function records that represent
the module functions. These records exist on their own pages, that is
function records for one module will not exist on the same page as
function records for other modules or even the core kernel.
Now when a module unloads, the records that represent its functions are
freed. When the module is loaded again, the records are recreated with
a default ref count of zero (unless there's a callback that traces all
functions, then they will also be traced, and the ref count will be
incremented).
The problem is that if an ftrace callback hash includes functions of the
module being unloaded, those hash entries will not be removed. If the
module is reloaded in the same location, the hash entries still point
to the functions of the module but the module's ref counts do not reflect
that.
With the help of Steve and Joern, we found a reproducer:
Using uinput module and uinput_release function.
cd /sys/kernel/debug/tracing
modprobe uinput
echo uinput_release > set_ftrace_filter
echo function > current_tracer
rmmod uinput
modprobe uinput
# check /proc/modules to see if loaded in same addr, otherwise try again
echo nop > current_tracer
[BOOM]
The above loads the uinput module, which creates a table of functions that
can be traced within the module.
We add uinput_release to the filter_hash to trace just that function.
Enable function tracing, which increments the ref count of the record
associated with uinput_release.
Remove uinput, which frees the records including the one that represents
uinput_release.
Load the uinput module again (and make sure it's at the same address).
This recreates the function records all with a ref count of zero,
including uinput_release.
Disable function tracing, which will decrement the ref count for uinput_release
which is now zero because of the module removal and reload, and we have
a mismatch (below zero ref count).
The solution is to check all currently tracing ftrace callbacks to see
if any are tracing any of the module's functions when a module is loaded
(it already does that with callbacks that trace all functions). If a
callback happens to have a module function being traced, it increments
that record's ref count and starts tracing that function.
There may be a strange side effect with this, where tracing module functions
on unload and then reloading a new module may have that new module's functions
being traced. This may be something that confuses the user, but it's not
a big deal. Another approach is to disable all callback hashes on module unload,
but this leaves some ftrace callbacks that may not be registered, but can
still have hashes tracing the module's function where ftrace doesn't know about
it. That situation can cause the same bug. This solution solves that case too.
Another benefit of this solution is that it is possible to trace a
module's functions across unload and reload.
Link: http://lkml.kernel.org/r/20130705142629.GA325@redhat.com
Reported-by: Jörn Engel <joern@logfs.org>
Reported-by: Dave Jones <davej@redhat.com>
Reported-by: Steve Hodgson <steve@purestorage.com>
Tested-by: Steve Hodgson <steve@purestorage.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit c6c2401d8bbaf9edc189b4c35a8cb2780b8b988e upstream.
Uprobes suffer the same problem that kprobes have. There's a race between
writing to the "enable" file and removing the probe. The probe checks for
it being in use and if it is not, goes about deleting the probe and the
event that represents it. But the problem with that is, after it checks
if it is in use it can be enabled, and the deletion of the event (access
to the probe) will fail, as it is in use. But the uprobe will still be
deleted. This is a problem as the event can reference the uprobe that
was deleted.
The fix is to remove the event first, and check to make sure the event
removal succeeds. Then it is safe to remove the probe.
When the event exists, either ftrace or perf can enable the probe and
prevent the event from being removed.
Link: http://lkml.kernel.org/r/20130704034038.991525256@goodmis.org
Acked-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 40c32592668b727cbfcf7b1c0567f581bd62a5e4 upstream.
When a probe is being removed, it cleans up the event files that correspond
to the probe. But there is a race between writing to one of these files
and deleting the probe. This is especially true for the "enable" file.
CPU 0                                  CPU 1
-----                                  -----
                                       fd = open("enable",O_WRONLY);

probes_open()
release_all_trace_probes()
unregister_trace_probe()
if (trace_probe_is_enabled(tp))
        return -EBUSY

                                       write(fd, "1", 1)
                                        __ftrace_set_clr_event()
                                         call->class->reg()
                                          (kprobe_register)
                                           enable_trace_probe(tp)

__unregister_trace_probe(tp);
        list_del(&tp->list)
unregister_probe_event(tp) <-- fails!
free_trace_probe(tp)

                                       write(fd, "0", 1)
                                        __ftrace_set_clr_event()
                                         call->class->unreg
                                          (kprobe_register)
                                           disable_trace_probe(tp) <-- BOOM!
A test program was written that used two threads to simulate the
above scenario adding a nanosleep() interval to change the timings
and after several thousand runs, it was able to trigger this bug
and crash:
BUG: unable to handle kernel paging request at 00000005000000f9
IP: [<ffffffff810dee70>] probes_open+0x3b/0xa7
PGD 7808a067 PUD 0
Oops: 0000 [#1] PREEMPT SMP
Dumping ftrace buffer:
---------------------------------
Modules linked in: ipt_MASQUERADE sunrpc ip6t_REJECT nf_conntrack_ipv6
CPU: 1 PID: 2070 Comm: test-kprobe-rem Not tainted 3.11.0-rc3-test+ #47
Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
task: ffff880077756440 ti: ffff880076e52000 task.ti: ffff880076e52000
RIP: 0010:[<ffffffff810dee70>] [<ffffffff810dee70>] probes_open+0x3b/0xa7
RSP: 0018:ffff880076e53c38 EFLAGS: 00010203
RAX: 0000000500000001 RBX: ffff88007844f440 RCX: 0000000000000003
RDX: 0000000000000003 RSI: 0000000000000003 RDI: ffff880076e52000
RBP: ffff880076e53c58 R08: ffff880076e53bd8 R09: 0000000000000000
R10: ffff880077756440 R11: 0000000000000006 R12: ffffffff810dee35
R13: ffff880079250418 R14: 0000000000000000 R15: ffff88007844f450
FS: 00007f87a276f700(0000) GS:ffff88007d480000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00000005000000f9 CR3: 0000000077262000 CR4: 00000000000007e0
Stack:
ffff880076e53c58 ffffffff81219ea0 ffff88007844f440 ffffffff810dee35
ffff880076e53ca8 ffffffff81130f78 ffff8800772986c0 ffff8800796f93a0
ffffffff81d1b5d8 ffff880076e53e04 0000000000000000 ffff88007844f440
Call Trace:
[<ffffffff81219ea0>] ? security_file_open+0x2c/0x30
[<ffffffff810dee35>] ? unregister_trace_probe+0x4b/0x4b
[<ffffffff81130f78>] do_dentry_open+0x162/0x226
[<ffffffff81131186>] finish_open+0x46/0x54
[<ffffffff8113f30b>] do_last+0x7f6/0x996
[<ffffffff8113cc6f>] ? inode_permission+0x42/0x44
[<ffffffff8113f6dd>] path_openat+0x232/0x496
[<ffffffff8113fc30>] do_filp_open+0x3a/0x8a
[<ffffffff8114ab32>] ? __alloc_fd+0x168/0x17a
[<ffffffff81131f4e>] do_sys_open+0x70/0x102
[<ffffffff8108f06e>] ? trace_hardirqs_on_caller+0x160/0x197
[<ffffffff81131ffe>] SyS_open+0x1e/0x20
[<ffffffff81522742>] system_call_fastpath+0x16/0x1b
Code: e5 41 54 53 48 89 f3 48 83 ec 10 48 23 56 78 48 39 c2 75 6c 31 f6 48 c7
RIP [<ffffffff810dee70>] probes_open+0x3b/0xa7
RSP <ffff880076e53c38>
CR2: 00000005000000f9
---[ end trace 35f17d68fc569897 ]---
The unregister_trace_probe() must be done first, and if it fails it must
fail the removal of the kprobe.
Several changes have already been made by Oleg Nesterov and Masami Hiramatsu
to allow moving the unregister_probe_event() before the removal of
the probe and exit the function if it fails. This prevents the tp
structure from being used after it is freed.
Link: http://lkml.kernel.org/r/20130704034038.819592356@goodmis.org
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|