|
Use the Tegra WDT to trigger an FIQ when the system is in soft lockup.
Bug 1581432
Change-Id: I853a88a3f6e9402c978db18c5a63e903c582040a
Signed-off-by: Renn Wu <rewu@nvidia.com>
Reviewed-on: http://git-master/r/265871
(cherry picked from commit f115f435d471af22ddec5e9d969662f79193f846)
Reviewed-on: http://git-master/r/680353
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: Winnie Hsu <whsu@nvidia.com>
|
|
Reduce power usage through the use of the cpu_relaxed_read macro and
the relaxed version of idle_cpu().
Bug 1440421
Change-Id: I0a8c5d358c154782b41570059d03d7e0de87e82c
Signed-off-by: Sumit Singh <sumsingh@nvidia.com>
Reviewed-on: http://git-master/r/426485
Reviewed-by: Sri Krishna Chowdary <schowdary@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com>
|
|
Define a relaxed version of idle_cpu(), built on the
cpu_relaxed_read_long macro, that will be used to enhance
power efficiency.
Bug 1440421
Change-Id: I6ba2185632ad0fba766a2548b0ddac743defb1b9
Signed-off-by: Sumit Singh <sumsingh@nvidia.com>
Reviewed-on: http://git-master/r/426484
Reviewed-by: Sri Krishna Chowdary <schowdary@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com>
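For illustration only, a minimal sketch of what these relaxed-read helpers and the relaxed idle_cpu() could look like; the log does not show the actual definitions, so every body below is an assumption (cpu_read_relax() presumably maps to cpu_relax(), e.g. a WFE on ARM):

    /* Sketch only -- assumed shapes, not the actual patch. */
    #define cpu_relaxed_read(p)       (*(volatile u32 *)(p))
    #define cpu_relaxed_read_long(p)  (*(volatile long *)(p))
    #define cpu_read_relax()          cpu_relax()   /* e.g. WFE on ARM */

    /* Hypothetical relaxed idle_cpu() variant built on the helpers. */
    int idle_cpu_relaxed(int cpu)
    {
            struct rq *rq = cpu_rq(cpu);

            if ((struct task_struct *)cpu_relaxed_read_long(&rq->curr) != rq->idle)
                    return 0;
            if (cpu_relaxed_read(&rq->nr_running))
                    return 0;
            return 1;
    }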
|
|
Reduce power usage through the use of the cpu_relaxed_read_short and
cpu_relaxed_read macros.
Bug 1440421
Change-Id: I114d122cf58b1cf7b93b4b5f5d712360a2f1e096
Signed-off-by: Sumit Singh <sumsingh@nvidia.com>
Reviewed-on: http://git-master/r/425947
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Sri Krishna Chowdary <schowdary@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com>
|
|
Bug 200012742
futex: Forbid uaddr == uaddr2 in futex_requeue(..., requeue_pi=1)
If uaddr == uaddr2, then we have broken the rule of only requeueing from
a non-pi futex to a pi futex with this call. If we attempt this, then
dangling pointers may be left for rt_waiter resulting in an exploitable
condition.
This change brings futex_requeue() in line with futex_wait_requeue_pi()
which performs the same check as per commit 6f7b0a2a5c0f ("futex: Forbid
uaddr == uaddr2 in futex_wait_requeue_pi()")
[ tglx: Compare the resulting keys as well, as uaddrs might be
different depending on the mapping ]
Fixes CVE-2014-3153.
Reported-by: Pinkie Pie
Signed-off-by: Will Drewry <wad@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change-Id: I98b5f95d3f5c9e4d35c3aeec22960fdb34731c18
Reviewed-on: http://git-master/r/424612
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
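A sketch of the added guard as the changelog describes it (the exact hunks are not reproduced in this log):

    /* Early in futex_requeue(): requeue_pi needs two distinct futexes. */
    if (requeue_pi) {
            if (uaddr1 == uaddr2)
                    return -EINVAL;
            /* ... existing requeue_pi checks ... */
    }

    /* And once both keys are resolved, per the tglx note above: */
    if (requeue_pi && match_futex(&key1, &key2)) {
            ret = -EINVAL;
            goto out_put_keys;
    }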
|
|
into promotion_build
Change-Id: I9418a05ad5c56b2e902249218bac2fa594d99f56
|
|
while_each_thread() and next_thread() should die, almost every lockless
usage is wrong.
1. Unless g == current, the lockless while_each_thread() is not safe.
while_each_thread(g, t) can loop forever if g exits, next_thread()
can't reach the unhashed thread in this case. Note that this can
happen even if g is the group leader, it can exec.
2. Even if while_each_thread() itself was correct, people often use
it wrongly.
It was never safe to just take rcu_read_lock() and loop unless
you verify that pid_alive(g) == T, even the first next_thread()
can point to the already freed/reused memory.
This patch adds signal_struct->thread_head and task->thread_node to
create the normal rcu-safe list with the stable head. The new
for_each_thread(g, t) helper is always safe under rcu_read_lock() as
long as this task_struct can't go away.
Note: of course it is ugly to have both task_struct->thread_node and the
old task_struct->thread_group, we will kill it later, after we change
the users of while_each_thread() to use for_each_thread().
Perhaps we can kill it even before we convert all users, we can
reimplement next_thread(t) using the new thread_head/thread_node. But
we can't do this right now because this will lead to subtle behavioural
changes. For example, do/while_each_thread() always sees at least one
task, while for_each_thread() can do nothing if the whole thread group
has died. Or thread_group_empty(), currently its semantics is not clear
unless thread_group_leader(p) and we need to audit the callers before we
can change it.
So this patch adds the new interface which has to coexist with the old
one for some time, hopefully the next changes will be more or less
straightforward and the old one will go away soon.
Bug 200004307
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Sergey Dyasly <dserrg@gmail.com>
Tested-by: Sergey Dyasly <dserrg@gmail.com>
Reviewed-by: Sameer Nanda <snanda@chromium.org>
Acked-by: David Rientjes <rientjes@google.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mandeep Singh Baines <msb@chromium.org>
Cc: "Ma, Xindong" <xindong.ma@intel.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: "Tu, Xiaobing" <xiaobing.tu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 0c740d0afc3bff0a097ad03a1c8df92757516f5c)
Signed-off-by: Sri Krishna chowdary <schowdary@nvidia.com>
Change-Id: Id689cb1383ceba2561b66188d88258619b68f5c6
Reviewed-on: http://git-master/r/419041
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
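The new helpers themselves are small; as added by the cherry-picked upstream commit, they look essentially like this (do_something() is a placeholder):

    /* include/linux/sched.h: an rcu-safe thread iterator with a stable
     * head in signal_struct, replacing lockless while_each_thread(). */
    #define __for_each_thread(signal, t) \
            list_for_each_entry_rcu(t, &(signal)->thread_head, thread_node)

    #define for_each_thread(p, t) \
            __for_each_thread((p)->signal, t)

    /* Typical usage under rcu_read_lock(), with g pinned by the caller: */
    rcu_read_lock();
    for_each_thread(g, t)
            do_something(t);
    rcu_read_unlock();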
|
|
Bug 200004122
Conflicts:
drivers/cpufreq/cpufreq.c
drivers/regulator/core.c
sound/soc/codecs/max98090.c
Change-Id: I9418a05ad5c56b2e902249218bac2fa594d99f56
Signed-off-by: Ishan Mittal <imittal@nvidia.com>
|
|
into promotion_build
|
|
Bug 200004122
Bug 1511804
This merge takes AOSP commits from the android-3.10 branch
Change-Id: I07ec2468114db0366d63777142c9572bbfadbc45
Signed-off-by: Ishan Mittal <imittal@nvidia.com>
|
|
Send out the notifiers if the enabled flag has been changed. This will
ensure that the notifier fires in all cases.
Bug 1516219
Change-Id: Ie97422f61a9ad56f0ce194a99ce69193d429eadc
Signed-off-by: Sai Gurrappadi <sgurrappadi@nvidia.com>
Reviewed-on: http://git-master/r/413433
Reviewed-by: Automatic_Commit_Validation_User
Tested-by: Diwakar Tundlam <dtundlam@nvidia.com>
Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com>
|
|
Ensure the system time is read from the RTC (the PMIC RTC in the case
of Tegra) after system resume.
- The change applies to all PMIC RTC drivers.
Bug 1510843
Reviewed-on: http://git-master/r/413365/
Reviewed-on: http://git-master/r/414163/
Reviewed-on: http://git-master/r/414169/
Reviewed-on: http://git-master/r/414200/
Reviewed-on: http://git-master/r/414201/
Change-Id: I8194f9f4be581216d662e6a6e0d7e98a9495159c
Signed-off-by: Bitan Biswas <bbiswas@nvidia.com>
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/414203
GVS: Gerrit_Virtual_Submit
|
|
Use cpu_relaxed_read and cpu_read_relax to allow more architectures
to "relax" instead of busy-spinning.
Bug 1440421
Change-Id: I48e36d7b3c953fe43ebb23ea814de7738c91e394
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/412712
Reviewed-by: Sumit Singh <sumsingh@nvidia.com>
GVS: Gerrit_Virtual_Submit
|
|
During the integration of this change to 3.10, this error check was
commented out.
Signed-off-by: Dan Willemsen <dwillemsen@nvidia.com>
Change-Id: I7e2fa114b0efbe58134f7acc60ec4d18b1bbee0e
Reviewed-on: http://git-master/r/409994
Reviewed-by: Thomas Cherry <tcherry@nvidia.com>
|
|
Add trace events for the entry point and exit point of all system
calls. This makes it possible to account for the time spent in each
system call.
Change-Id: Iba88c0a38424ea8e986954e23f13f4417578379d
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/407691
GVS: Gerrit_Virtual_Submit
|
|
Add a trace event for the end of a context switch. This makes it
possible to account for the time spent context switching using the
event log.
Change-Id: Idc7bcc47cf51d5e2200e2f9380ba0100e1e56884
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/407689
GVS: Gerrit_Virtual_Submit
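The event definitions are not shown in this log; purely as an illustration, a context-switch-end event could be declared like this (the event name and fields are assumptions, not the patch's actual code):

    TRACE_EVENT(sched_switch_end,
            TP_PROTO(struct task_struct *next),
            TP_ARGS(next),
            TP_STRUCT__entry(
                    __array(char, comm, TASK_COMM_LEN)
                    __field(pid_t, pid)
            ),
            TP_fast_assign(
                    memcpy(__entry->comm, next->comm, TASK_COMM_LEN);
                    __entry->pid = next->pid;
            ),
            TP_printk("next_comm=%s next_pid=%d", __entry->comm, __entry->pid)
    );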
|
|
commit 80df28476505ed4e6701c3448c63c9229a50c655 upstream.
As sysctl_hung_task_timeout_secs is unsigned long, when this value is
larger than LONG_MAX/HZ, the function schedule_timeout_interruptible in
the watchdog will return immediately without sleeping and will print:
schedule_timeout: wrong timeout value ffffffffffffff83
and then the function watchdog will call schedule_timeout_interruptible
again and again. The screen fills with
"schedule_timeout: wrong timeout value ffffffffffffff83"
This patch adds a check and correction in sysctl, so that the function
schedule_timeout_interruptible always gets a valid parameter.
Signed-off-by: Liu Hua <sdu.liu@huawei.com>
Tested-by: Satoru Takeuchi <satoru.takeuchi@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
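A sketch of the sysctl-side clamp, assuming the 3.10-era sysctl table layout (the exact hunk is not reproduced in this log):

    static unsigned long hung_task_timeout_max = (LONG_MAX / HZ);

    /* kernel/sysctl.c entry: extra2 caps the value so that
     * schedule_timeout_interruptible() always gets a valid timeout. */
    {
            .procname     = "hung_task_timeout_secs",
            .data         = &sysctl_hung_task_timeout_secs,
            .maxlen       = sizeof(unsigned long),
            .mode         = 0644,
            .proc_handler = proc_dohung_task_timeout_secs,
            .extra2       = &hung_task_timeout_max,
    },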
|
|
* ARM64 Upstream changes: (113 commits)
Bug 1494135
Change-Id: I2e7c054d5bd6f702115f0697693298480d83e0b4
Signed-off-by: Ishan Mittal <imittal@nvidia.com>
|
|
We've switched over every architecture that supports SMP to it, so
remove the now-useless config variable.
Conflicts:
arch/arm/Kconfig
block/blk-mq.c
Change-Id: Ic19c3ac07a38a1636d6aa2fed5e55a58833f9b2c
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 0a06ff068f1255bcd7965ab07bc0f4adc3eb639a)
Signed-off-by: Ishan Mittal <imittal@nvidia.com>
|
|
Conflicts:
arch/x86/ia32/ia32_signal.c
Change-Id: I701cb58fffb756a2d29e7c828728d149c35e6c6a
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
(cherry picked from commit ce3959604878c1c693979ec552069dc8bdb5ccde)
Signed-off-by: Ishan Mittal <imittal@nvidia.com>
|
|
Bug 1498965
Change-Id: I353b383acd5ae32338ef38468787644fc9eafe48
Signed-off-by: Yifei Wan <ywan@nvidia.com>
Reviewed-on: http://git-master/r/400929
Reviewed-by: Mandar Padmawar <mpadmawar@nvidia.com>
Tested-by: Mandar Padmawar <mpadmawar@nvidia.com>
|
|
commit c39df5fa37b0623589508c95515b4aa1531c524e upstream.
Commit 8aac62706ada ("move exit_task_namespaces() outside of
exit_notify()") breaks pppd and the exiting service crashes the kernel:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000028
IP: ppp_register_channel+0x13/0x20 [ppp_generic]
Call Trace:
  ppp_asynctty_open+0x12b/0x170 [ppp_async]
  tty_ldisc_open.isra.2+0x27/0x60
  tty_ldisc_hangup+0x1e3/0x220
  __tty_hangup+0x2c4/0x440
  disassociate_ctty+0x61/0x270
  do_exit+0x7f2/0xa50
ppp_register_channel() needs ->net_ns and current->nsproxy == NULL.
Move disassociate_ctty() before exit_task_namespaces(), it doesn't make
sense to delay it after perf_event_exit_task() or cgroup_exit().
This also allows task_work_add() to be used inside the (nontrivial) code
paths in disassociate_ctty().
Investigated by Peter Hurley.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Sree Harsha Totakura <sreeharsha@totakura.in>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Sree Harsha Totakura <sreeharsha@totakura.in>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Andrey Vagin <avagin@openvz.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
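The resulting ordering in do_exit(), sketched from the description above (surrounding calls elided):

    /* ... */
    exit_fs(tsk);
    if (group_dead)
            disassociate_ctty(1);   /* moved up: the tty ldisc reopen may
                                     * call ppp_register_channel(), which
                                     * still needs ->net_ns */
    exit_task_namespaces(tsk);      /* now runs after disassociate_ctty() */
    exit_task_work(tsk);
    /* ... perf_event_exit_task(), cgroup_exit(), etc. ... */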
|
|
commit dfccbb5e49a621c1b21a62527d61fc4305617aca upstream.
wait_task_zombie() first does EXIT_ZOMBIE->EXIT_DEAD transition and
drops tasklist_lock. If this task is not the natural child and it is
traced, we change its state back to EXIT_ZOMBIE for ->real_parent.
The last transition is racy, this is even documented in 50b8d257486a
"ptrace: partially fix the do_wait(WEXITED) vs EXIT_DEAD->EXIT_ZOMBIE
race". wait_consider_task() tries to detect this transition and clear
->notask_error but we can't rely on ptrace_reparented(), debugger can
exit and do ptrace_unlink() before its sub-thread sets EXIT_ZOMBIE.
And there is another problem which was missed before: this transition
can also race with reparent_leader(), which doesn't reset ->exit_signal if
EXIT_DEAD, assuming that this task must be reaped by someone else. So
the tracee can be re-parented with ->exit_signal != SIGCHLD, and if
/sbin/init doesn't use __WALL it becomes unreapable.
Change reparent_leader() to update ->exit_signal even if EXIT_DEAD.
Note: this is the simple temporary hack for -stable, it doesn't try to
solve all problems, it will be reverted by the next changes.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Jan Kratochvil <jan.kratochvil@redhat.com>
Reported-by: Michal Schmidt <mschmidt@redhat.com>
Tested-by: Michal Schmidt <mschmidt@redhat.com>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Cc: Lennart Poettering <lpoetter@redhat.com>
Cc: Roland McGrath <roland@hack.frob.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit d23082257d83e4bc89727d5aedee197e907999d2 upstream.
pidns_get()->get_pid_ns() can hit ns == NULL. This task_struct can't
go away, but task_active_pid_ns(task) is NULL if release_task(task)
was already called. Alternatively we could change get_pid_ns(ns) to
check ns != NULL, but it seems that other callers are fine.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
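The shape of the fix, per the description (sketch):

    /* pidns_get(): take the reference only if the namespace is still
     * there; release_task(task) may already have run. */
    rcu_read_lock();
    ns = task_active_pid_ns(task);
    if (ns)
            get_pid_ns(ns);
    rcu_read_unlock();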
|
|
commit e79323bd87808fdfbc68ce6c5371bd224d9672ee upstream.
smp_read_barrier_depends() can be used if there is data dependency between
the readers - i.e. if the read operation after the barrier uses address
that was obtained from the read operation before the barrier.
In this file, there is only a control dependency, no data dependency, so the
use of smp_read_barrier_depends() is incorrect. The code could fail in the
following way:
* the cpu predicts that idx < extents is true and starts executing the
body of the for loop
* the cpu fetches map->extent[0].first and map->extent[0].count
* the cpu fetches map->nr_extents
* the cpu verifies that idx < extents is true, so it commits the
instructions in the body of the for loop
The problem is that in this scenario, the cpu read map->extent[0].first
and map->nr_extents in the wrong order. We need a full read memory barrier
to prevent it.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
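Sketched from the description, the corrected lookup orders the reads with a real read barrier (field names follow the uid-map extent structure referenced above):

    extents = map->nr_extents;
    smp_rmb();      /* full read barrier: a control dependency alone does
                     * not order the extent[] reads after nr_extents */
    for (idx = 0; idx < extents; idx++) {
            first = map->extent[idx].first;
            count = map->extent[idx].count;
            if (id >= first && id <= first + count - 1)
                    break;
    }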
|
|
/sys/kernel/wakeup_reasons/last_resume_reason
Change-Id: I8ac568a7cb58c31decd379195de517ff3c6f9c65
Signed-off-by: Ruchi Kandoi <kandoiruchi@google.com>
|
|
thread.
The second argument is similar to PR_SET_TIMERSLACK: if non-zero, the
slack is set to that value; otherwise it is set to the default for the thread.
Takes the PID of the thread as the third argument.
This allows power/performance management software to set timer slack for
other threads according to its policy for the thread (such as when the
thread is designated foreground vs. background activity).
Change-Id: I744d451ff4e60dae69f38f53948ff36c51c14a3f
Signed-off-by: Ruchi Kandoi <kandoiruchi@google.com>
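An illustrative userspace call, assuming the Android PR_SET_TIMERSLACK_PID interface this description matches (the prctl name and number are assumptions, not stated in this log):

    #include <sys/prctl.h>

    #ifndef PR_SET_TIMERSLACK_PID
    #define PR_SET_TIMERSLACK_PID 41        /* assumed value */
    #endif

    /* Give a background thread 50 us of timer slack; 0 would restore
     * the thread's default. The PID goes in the third argument. */
    prctl(PR_SET_TIMERSLACK_PID, 50000 /* ns */, background_tid);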
|
|
commit 03b8c7b623c80af264c4c8d6111e5c6289933666 upstream.
If an architecture has futex_atomic_cmpxchg_inatomic() implemented and there
is no runtime check necessary, allow to skip the test within futex_init().
This allows to get rid of some code which would always give the same result,
and also allows the compiler to optimize a couple of if statements away.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Finn Thain <fthain@telegraphics.com.au>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Link: http://lkml.kernel.org/r/20140302120947.GA3641@osiris
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[geert: Backported to v3.10..v3.13]
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
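The runtime probe that this change makes skippable looks essentially like this (sketch):

    static void __init futex_detect_cmpxchg(void)
    {
    #ifndef CONFIG_HAVE_FUTEX_CMPXCHG
            u32 curval;

            /* Probing a NULL user address must fault; if the arch's
             * futex cmpxchg works at all, we get -EFAULT here. */
            if (cmpxchg_futex_value_locked(&curval, NULL, 0, 0) == -EFAULT)
                    futex_cmpxchg_enabled = 1;
    #endif
    }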
|
|
NOHZ-related per-cpu data is cleared on CPU down. This was introduced by
4b0c0f294 ("tick: Cleanup NOHZ per cpu data on cpu down"), which breaks
/proc/stat because the idle and iowait sums are now non-monotonic
across a CPU down/up cycle.
Fix this by not clearing the idle_sleeptime and iowait_sleeptime fields
on CPU down.
Bug 1484092
Change-Id: Ia57f5690672c0f35cf36098f92f242b94b545bb7
Signed-off-by: Sai Gurrappadi <sgurrappadi@nvidia.com>
Reviewed-on: http://git-master/r/386664
Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com>
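A sketch of the idea (the helper name below is hypothetical): clear the per-cpu tick state on CPU down but carry the accumulated sums over.

    static void tick_nohz_cleanup_dead_cpu(int cpu)
    {
            struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
            ktime_t idle   = ts->idle_sleeptime;
            ktime_t iowait = ts->iowait_sleeptime;

            memset(ts, 0, sizeof(*ts));

            /* keep /proc/stat idle/iowait monotonic across down/up */
            ts->idle_sleeptime   = idle;
            ts->iowait_sleeptime = iowait;
    }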
|
|
commit 41261b6a832ea0e788627f6a8707854423f9ff49 upstream.
In autogroup_create(), a tg is allocated and added to the task_groups
list. If CONFIG_RT_GROUP_SCHED is set, this tg is then modified while on
the list, without locking. This can race with someone walking the list,
like __enable_runtime() during CPU unplug, and result in a use-after-free
bug.
To fix this, move sched_online_group(), which adds the tg to the list,
to the end of the autogroup_create() function after the modification.
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1369411669-46971-2-git-send-email-gerald.schaefer@de.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
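Sketch of the reordering in autogroup_create() described above:

    tg = sched_create_group(&root_task_group);
    if (IS_ERR(tg))
            goto out_fail;

    /* ... CONFIG_RT_GROUP_SCHED tweaks happen here, while tg is still
     * private and invisible to task_groups list walkers ... */

    sched_online_group(tg, &root_task_group);   /* moved to the end: only
                                                 * now does tg hit the
                                                 * task_groups list */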
|
|
commit 87291347c49dc40aa339f587b209618201c2e527 upstream.
In event format strings, the array size is reported in two locations.
One in array subscript and then via the "size:" attribute. The values
reported there have a mismatch.
For e.g., in sched:sched_switch the prev_comm and next_comm character
arrays have subscript values as [32] where as the actual field size is
16.
name: sched_switch
ID: 301
format:
field:unsigned short common_type; offset:0; size:2; signed:0;
field:unsigned char common_flags; offset:2; size:1; signed:0;
field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
field:int common_pid; offset:4; size:4; signed:1;
field:char prev_comm[32]; offset:8; size:16; signed:1;
field:pid_t prev_pid; offset:24; size:4; signed:1;
field:int prev_prio; offset:28; size:4; signed:1;
field:long prev_state; offset:32; size:8; signed:1;
field:char next_comm[32]; offset:40; size:16; signed:1;
field:pid_t next_pid; offset:56; size:4; signed:1;
field:int next_prio; offset:60; size:4; signed:1;
After bisection, the following commit was blamed:
92edca0 tracing: Use direct field, type and system names
This commit removes the duplication of strings for field->name and
field->type assuming that all the strings passed in
__trace_define_field() are immutable. This is not true for arrays, where
the type string is created in event_storage variable and field->type for
all array fields points to event_storage.
Use __stringify() to create a string constant for the type string.
Also, get rid of event_storage and event_storage_mutex that are not
needed anymore.
Also, an added benefit is that this reduces the overhead of events a bit more:
text data bss dec hex filename
8424787 2036472 1302528 11763787 b3804b vmlinux
8420814 2036408 1302528 11759750 b37086 vmlinux.patched
Link: http://lkml.kernel.org/r/1392349908-29685-1-git-send-email-vnagarnaik@google.com
Cc: Laurent Chavey <chavey@google.com>
Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
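The core of the fix (sketch): build the array type string as a compile-time constant instead of pointing field->type at the shared event_storage buffer.

    #undef __array
    #define __array(type, item, len)                                      \
            do {                                                          \
                    char *type_str = #type"["__stringify(len)"]";         \
                    ret = trace_define_field(event_call, type_str, #item, \
                                    offsetof(typeof(field), item),        \
                                    sizeof(field.item),                   \
                                    is_signed_type(type), FILTER_OTHER);  \
                    if (ret)                                              \
                            return ret;                                   \
            } while (0);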
|
|
Using macros cpu_relaxed_read, cpu_relaxed_read_long and
cpu_read_relax in core.c to improve power efficiency.
Bug 1440421
Change-Id: Ic49b88fa6fb0cb847fed49886b2865bed72203db
Signed-off-by: Sumit Singh <sumsingh@nvidia.com>
Reviewed-on: http://git-master/r/368841
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
|
|
commit c9b5a266b103af873abb9ac03bc3d067702c8f4b upstream.
In periodic mode we remove offline cpus from the broadcast propagation
mask. In oneshot mode we fail to do so. This was not a problem so far,
but the recent changes to the broadcast propagation introduced a
constellation which can result in a NULL pointer dereference.
What happens is:
CPU0                                        CPU1
                                            idle()
                                              arch_idle()
                                                tick_broadcast_oneshot_control(OFF);
                                                  set cpu1 in tick_broadcast_force_mask
                                              if (cpu_offline())
                                                arch_cpu_dead()
cpu_dead_cleanup(cpu1)
  cpu1 tickdevice pointer = NULL
broadcast interrupt
  dereference cpu1 tickdevice pointer -> OOPS
We dereference the pointer because cpu1 is still set in
tick_broadcast_force_mask and tick_do_broadcast() expects a valid
cpumask and therefor lacks any further checks.
Remove the cpu from the tick_broadcast_force_mask before we set the
tick device pointer to NULL. Also add a sanity check to the oneshot
broadcast function, so we can detect such issues w/o crashing the
machine.
Reported-by: Prarit Bhargava <prarit@redhat.com>
Cc: athorlton@sgi.com
Cc: CAI Qian <caiqian@redhat.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1306261303260.4013@ionos.tec.linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
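Sketched from the description, the two parts of the fix (the mask variable name in the handler is an assumption):

    /* 1) In the oneshot shutdown path, before the dying cpu's tick
     *    device pointer is gone: */
    cpumask_clear_cpu(cpu, tick_broadcast_pending_mask);
    cpumask_clear_cpu(cpu, tick_broadcast_force_mask);

    /* 2) A sanity check in the oneshot broadcast handler, so a stale
     *    mask bit no longer oopses the machine: */
    if (WARN_ON_ONCE(!cpumask_subset(tmpmask, cpu_online_mask)))
            cpumask_and(tmpmask, tmpmask, cpu_online_mask);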
|
|
commit 45ab2813d40d88fc575e753c38478de242d03f88 upstream.
If a module fails to add its tracepoints due to module tainting, do not
create the module event infrastructure in the debugfs directory. As the events
will not work and worse yet, they will silently fail, making the user wonder
why the events they enable do not display anything.
Having a warning on module load and the events not visible to the users
will make the cause of the problem much clearer.
Link: http://lkml.kernel.org/r/20140227154923.265882695@goodmis.org
Fixes: 6d723736e472 "tracing/events: add support for modules to TRACE_EVENT"
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
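The shape of the check (sketch; trace_module_has_bad_taint() is the taint test used by the tracepoint side of this fix):

    static void trace_module_add_events(struct module *mod)
    {
            if (!mod->num_trace_events)
                    return;

            /* Don't create debugfs event dirs for a module whose
             * tracepoints were refused due to tainting -- they could
             * never fire anyway. */
            if (trace_module_has_bad_taint(mod))
                    return;
            /* ... create the event infrastructure ... */
    }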
|
|
commit 99afb0fd5f05aac467ffa85c36778fec4396209b upstream.
It's not safe to access task's cpuset after releasing task_lock().
Holding callback_mutex won't help.
Signed-off-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit c685689fd24d310343ac33942e9a54a974ae9c43 upstream.
We hit one rare case below:
T1 is calling disable_irq(), but hangs at synchronize_irq() forever;
the corresponding irq thread is sleeping;
and all CPUs are idle.
After analysis, we found one possible scenario which causes T1 to
wait there forever:
CPU0                                        CPU1
synchronize_irq()
  wait_event()
    spin_lock()
                                            atomic_dec_and_test(&threads_active)
    insert the __wait into queue
    spin_unlock()
                                            if (waitqueue_active)
    atomic_read(&threads_active)
                                            wake_up()
Here, after the __wait entry is inserted into the queue on CPU0, and
before CPU1 tests whether the queue is empty, there is no barrier; the
insertion may not yet be visible to CPU1, even though CPU0 has updated
the queue list.
The same applies to CPU0's atomic_read() of threads_active.
So we'd need an smp_mb() before waitqueue_active(), but removing the
waitqueue_active() check solves it as well, and it makes things simple
and clear.
Signed-off-by: Chuansheng Liu <chuansheng.liu@intel.com>
Cc: Xiaoming Wang <xiaoming.wang@intel.com>
Link: http://lkml.kernel.org/r/1393212590-32543-1-git-send-email-chuansheng.liu@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
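The resulting wakeup side, with the racy waitqueue_active() test gone (sketch):

    static void wake_threads_waitq(struct irq_desc *desc)
    {
            if (atomic_dec_and_test(&desc->threads_active))
                    wake_up(&desc->wait_for_threads);
    }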
|
|
commit 791c9e0292671a3bfa95286bb5c08129d8605618 upstream.
dequeue_entity() is called when p->on_rq is set and sets se->on_rq = 0,
which appears to guarantee that the !se->on_rq condition is met.
If the task has done set_current_state(TASK_INTERRUPTIBLE) without
schedule() the second condition will be met and vruntime will be
incorrectly adjusted twice.
In certain cases this can result in the task's vruntime never increasing
past the vruntime of other tasks on the CFS' run queue, starving them of
CPU time.
This patch changes switched_from_fair() to use !p->on_rq instead of
!se->on_rq.
I'm able to cause a task with a priority of 120 to starve all other
tasks with the same priority on an ARM platform running 3.2.51-rt72
PREEMPT RT by writing one character at time to a serial tty (16550 UART)
in a tight loop. I'm also able to verify making this change corrects the
problem on that platform and kernel version.
Signed-off-by: George McCollister <george.mccollister@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1392767811-28916-1-git-send-email-george.mccollister@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
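The one-line nature of the fix, sketched against the 3.x code:

    static void switched_from_fair(struct rq *rq, struct task_struct *p)
    {
            struct sched_entity *se = &p->se;
            struct cfs_rq *cfs_rq = cfs_rq_of(se);

            /* was: if (!se->on_rq && ...) -- see the changelog above */
            if (!p->on_rq && p->state != TASK_RUNNING) {
                    place_entity(cfs_rq, se, 0);
                    se->vruntime -= cfs_rq->min_vruntime;
            }
            /* ... */
    }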
|
|
This reverts commit dc893e19b5800d7743fb58235877bfa9091805ff.
Bug 1481993
Change-Id: I147913ac71e89cba13a5a99a751e142c1d3a61da
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: http://git-master/r/382701
Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com>
|
|
Bug 1456092
Change-Id: I3021247ec68a3c2dddd9e98cde13d70a45191d53
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
|
|
On x86, irq_count conflicts with a declaration in
arch/x86/include/asm/processor.h
Change-Id: I3e4fde0ff64ef59ff5ed2adc0ea3a644641ee0b7
Signed-off-by: Greg Hackmann <ghackmann@google.com>
|
|
Ensure the array for the wakeup reason IRQs does not overflow.
Change-Id: Iddc57a3aeb1888f39d4e7b004164611803a4d37c
Signed-off-by: Ruchi Kandoi <kandoiruchi@google.com>
(cherry picked from commit b5ea40cdfcf38296535f931a7e5e7bf47b6fad7f)
|
|
commit e3703f8cdfcf39c25c4338c3ad8e68891cca3731 upstream.
Drew Richardson reported that he could make the kernel go *boom* when hotplugging
while having perf events active.
It turned out that when you have a group event, the code in
__perf_event_exit_context() fails to remove the group siblings from
the context.
We then proceed with destroying and freeing the event, and when you
re-plug the CPU and try and add another event to that CPU, things go
*boom* because you've still got dead entries there.
Reported-by: Drew Richardson <drew.richardson@arm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/n/tip-k6v5wundvusvcseqj1si0oz0@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
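The resulting exit path, sketched from the description: it now walks ctx->event_list, which includes group siblings, instead of only the group leader lists.

    static void __perf_event_exit_context(void *__info)
    {
            struct perf_event_context *ctx = __info;
            struct perf_event *event;

            perf_pmu_rotate_stop(ctx->pmu);

            /* walk the full event list so group siblings go too */
            rcu_read_lock();
            list_for_each_entry_rcu(event, &ctx->event_list, event_entry)
                    __perf_remove_from_context(event);
            rcu_read_unlock();
    }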
|
|
commit 5bdfff96c69a4d5ab9c49e60abf9e070ecd2acbb upstream.
When a kworker should die, the kworker is notified through the WORKER_DIE
flag instead of kthread_should_stop(). This, IIRC, is primarily to
keep the test synchronized inside the worker_pool lock. WORKER_DIE is
first set while holding pool->lock, the lock is dropped and
kthread_stop() is called.
Unfortunately, this means that there's a slight chance that the target
kworker may see WORKER_DIE before kthread_stop() finishes and exits
and frees the target task before or during kthread_stop().
Fix it by pinning the target task before setting WORKER_DIE and
putting it after kthread_stop() is done.
tj: Improved patch description and comment. Moved pinning above
WORKER_DIE to better signify what it's protecting.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
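Sketch of the teardown ordering described above:

    struct task_struct *task = worker->task;

    get_task_struct(task);          /* pin before WORKER_DIE can be seen */

    spin_lock_irq(&pool->lock);
    worker->flags |= WORKER_DIE;
    spin_unlock_irq(&pool->lock);

    kthread_stop(task);
    put_task_struct(task);          /* our ref kept the task valid across
                                     * the self-exit window */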
|
|
If a task sched_yields to itself continuously, force the task to
sleep in sched_yield. This lowers the CPU load of the task,
thereby lowering the CPU frequency and improving power.
Added a stat variable to track how many times we sleep due to these
consecutive sched_yields. Also added sysctl knobs to control the number
of consecutive sched_yields after which the sleep kicks in, and the
duration of the sleep in us (see the illustrative sketch below).
Bug 1424617
Change-Id: Ie92412b8b900365816e17237fcbd0aac6e9c94ce
Signed-off-by: Sai Gurrappadi <sgurrappadi@nvidia.com>
Reviewed-on: http://git-master/r/358455
Reviewed-by: Wen Yi <wyi@nvidia.com>
Reviewed-by: Peter Zu <pzu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com>
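Illustrative only; every identifier below is an assumption, since the patch's real knob, stat, and field names are not given in this log:

    unsigned int sysctl_yield_sleep_threshold = 4;   /* yields in a row */
    unsigned int sysctl_yield_sleep_us = 100;        /* sleep length, us */

    /* In the sched_yield path (consecutive_yields is a hypothetical
     * per-task counter): */
    if (++current->consecutive_yields >= sysctl_yield_sleep_threshold) {
            schedstat_inc(this_rq(), yld_sleep_count);      /* new stat */
            usleep_range(sysctl_yield_sleep_us, sysctl_yield_sleep_us);
            current->consecutive_yields = 0;
    }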
|
|
commit 2c45aada341121438affc4cb8d5b4cfaa2813d3d upstream.
In allmodconfig builds for sparc and any other arch which does
not set CONFIG_SPARSE_IRQ, the following will be seen at modpost:
CC [M] lib/cpu-notifier-error-inject.o
CC [M] lib/pm-notifier-error-inject.o
ERROR: "irq_to_desc" [drivers/gpio/gpio-mcp23s08.ko] undefined!
make[2]: *** [__modpost] Error 1
This happens because commit 3911ff30f5 ("genirq: export
handle_edge_irq() and irq_to_desc()") added one export for it, but
there were actually two instances of it, in an if/else clause for
CONFIG_SPARSE_IRQ. Add the second one.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Link: http://lkml.kernel.org/r/1392057610-11514-1-git-send-email-paul.gortmaker@windriver.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
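The shape of the fix (sketch): the !CONFIG_SPARSE_IRQ branch gains the export that the SPARSE_IRQ branch already had.

    /* kernel/irq/irqdesc.c */
    #ifndef CONFIG_SPARSE_IRQ
    struct irq_desc *irq_to_desc(unsigned int irq)
    {
            return (irq < NR_IRQS) ? irq_desc + irq : NULL;
    }
    EXPORT_SYMBOL(irq_to_desc);     /* the previously missing second export */
    #endif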
|
|
commit d651aa1d68a2f0a7ee65697b04c6a92f8c0a12f2 upstream.
Each sub-buffer (buffer page) has a full 64 bit timestamp. The events on
that page use a 27 bit delta against that timestamp in order to save on
bits written to the ring buffer. If the time between events is larger than
what the 27 bits can hold, a "time extend" event is added to hold the
entire 64 bit timestamp again and the events after that hold a delta from
that timestamp.
As a "time extend" is always paired with an event, it is logical to just
allocate the event with the time extend, to make things a bit more efficient.
Unfortunately, when the pairing code was written, it removed the "delta = 0"
from the first commit on a page, causing the events on the page to be
slightly skewed.
Fixes: 69d1b839f7ee "ring-buffer: Bind time extend and data events together"
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 80d767d770fd9c697e434fd080c2db7b5c60c6dd upstream.
When compiling for the IA-64 ski emulator, HZ is set to 32 because the
emulation is slow and we don't want to waste too many cycles processing
timers. Alpha also has an option to set HZ to 32.
This causes integer underflow in
kernel/time/jiffies.c:
kernel/time/jiffies.c:66:2: warning: large integer implicitly truncated to unsigned type [-Woverflow]
.mult = NSEC_PER_JIFFY << JIFFIES_SHIFT, /* details above */
^
This patch reduces the JIFFIES_SHIFT value to avoid the overflow.
Signed-off-by: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
Link: http://lkml.kernel.org/r/alpine.LRH.2.02.1401241639100.23871@file01.intranet.prod.int.rdu2.redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
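Shape of the fix (sketch): pick a smaller shift for small HZ so NSEC_PER_JIFFY << JIFFIES_SHIFT still fits the clocksource's 32-bit .mult. With HZ == 32, NSEC_PER_JIFFY is roughly 2^25, so a shift of 8 overflows.

    #if HZ < 34
    # define JIFFIES_SHIFT  6
    #elif HZ < 67
    # define JIFFIES_SHIFT  7
    #else
    # define JIFFIES_SHIFT  8
    #endif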
|
|
commit dd5fd9b91a77b4c9c28b7ef9c181b1a875820d0a upstream.
AMD systems which use the C1E workaround in the amd_e400_idle routine
trigger the WARN_ON_ONCE in the broadcast code when onlining a CPU.
The reason is that the idle routine of those AMD systems switches the
cpu into forced broadcast mode early on before the newly brought up
CPU can switch over to high resolution / NOHZ mode. The timer related
CPU1 bringup looks like this:
clockevent_register_device(local_apic);
tick_setup(local_apic);
...
idle()
    tick_broadcast_on_off(FORCE);
    tick_broadcast_oneshot_control(ENTER)
      cpumask_set(cpu, broadcast_oneshot_mask);
    halt();
Now the broadcast interrupt on CPU0 sets CPU1 in the
broadcast_pending_mask and wakes CPU1. So CPU1 continues:
local_apic_timer_interrupt()
    tick_handle_periodic();
softirq()
    tick_init_highres();
      cpumask_clr(cpu, broadcast_oneshot_mask);
idle()
    tick_broadcast_oneshot_control(ENTER)
      WARN_ON(cpumask_test(cpu, broadcast_pending_mask));
So while we remove CPU1 from the broadcast_oneshot_mask when we switch
over to highres mode, we do not clear the pending bit, which then
triggers the warning when we go back to idle.
The reason why this is only visible on C1E affected AMD systems is
that the other machines enter the deep sleep states via
acpi_idle/intel_idle and exit the broadcast mode before executing the
remote triggered local_apic_timer_interrupt. So the pending bit is
already cleared when the switch over to highres mode is clearing the
oneshot mask.
The solution is simple: Clear the pending bit together with the mask
bit when we switch over to highres mode.
Stanislaw came up independently with the same patch by enforcing the
C1E workaround and debugging the fallout. I picked mine, because mine
has a changelog :)
Reported-by: poma <pomidorabelisima@gmail.com>
Debugged-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Olaf Hering <olaf@aepfle.de>
Cc: Dave Jones <davej@redhat.com>
Cc: Justin M. Forbes <jforbes@redhat.com>
Cc: Josh Boyer <jwboyer@redhat.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1402111434180.21991@ionos.tec.linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
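Core of the fix (sketch): leaving broadcast mode now clears the pending bit together with the oneshot mask bit.

    static void tick_broadcast_clear_oneshot(int cpu)
    {
            cpumask_clear_cpu(cpu, tick_broadcast_oneshot_mask);
            cpumask_clear_cpu(cpu, tick_broadcast_pending_mask);  /* added */
    }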
|
|
Change I81addaf420f1338255c5d0638b0d244a99d777d1 introduced compile
warnings; fix them.
Change-Id: I05482a5335599ab96c0a088a7d175c8d4cf1cf69
Signed-off-by: Ruchi Kandoi <kandoiruchi@google.com>
|
|
Add API log_wakeup_reason() and expose it to userspace via sysfs path
/sys/kernel/wakeup_reasons/last_resume_reason
Change-Id: I81addaf420f1338255c5d0638b0d244a99d777d1
Signed-off-by: Ruchi Kandoi <kandoiruchi@google.com>
|