<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git, branch v3.8.8</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>Linux 3.8.8</title>
<updated>2013-04-17T05:11:28+00:00</updated>
<author>
<name>Greg Kroah-Hartman</name>
<email>gregkh@linuxfoundation.org</email>
</author>
<published>2013-04-17T05:11:28+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=2396403a0402caf7b9decbc5d206fa63ba62b6b7'/>
<id>2396403a0402caf7b9decbc5d206fa63ba62b6b7</id>
<content type='text'>
</content>
</entry>
<entry>
<title>tty: don't deadlock while flushing workqueue</title>
<updated>2013-04-17T04:48:30+00:00</updated>
<author>
<name>Sebastian Andrzej Siewior</name>
<email>bigeasy@linutronix.de</email>
</author>
<published>2012-12-25T22:02:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=77d0ca8b5d7e7cd48005ddb79b05c79082a68bb5'/>
<id>77d0ca8b5d7e7cd48005ddb79b05c79082a68bb5</id>
<content type='text'>
commit 852e4a8152b427c3f318bb0e1b5e938d64dcdc32 upstream.

Since commit 89c8d91e31f2 ("tty: localise the lock") I see a deadlock
in one of my dummy_hcd + g_nokia test cases. The first run was usually
okay, the second often resulted in a lockdep splat, and the third was
usually a deadlock.
Lockdep complained about tty-&gt;hangup_work and tty-&gt;legacy_mutex taken
both ways:
| ======================================================
| [ INFO: possible circular locking dependency detected ]
| 3.7.0-rc6+ #204 Not tainted
| -------------------------------------------------------
| kworker/2:1/35 is trying to acquire lock:
|  (&amp;tty-&gt;legacy_mutex){+.+.+.}, at: [&lt;c14051e6&gt;] tty_lock_nested+0x36/0x80
|
| but task is already holding lock:
|  ((&amp;tty-&gt;hangup_work)){+.+...}, at: [&lt;c104f6e4&gt;] process_one_work+0x124/0x5e0
|
| which lock already depends on the new lock.
|
| the existing dependency chain (in reverse order) is:
|
| -&gt; #2 ((&amp;tty-&gt;hangup_work)){+.+...}:
|        [&lt;c107fe74&gt;] lock_acquire+0x84/0x190
|        [&lt;c104d82d&gt;] flush_work+0x3d/0x240
|        [&lt;c12e6986&gt;] tty_ldisc_flush_works+0x16/0x30
|        [&lt;c12e7861&gt;] tty_ldisc_release+0x21/0x70
|        [&lt;c12e0dfc&gt;] tty_release+0x35c/0x470
|        [&lt;c1105e28&gt;] __fput+0xd8/0x270
|        [&lt;c1105fcd&gt;] ____fput+0xd/0x10
|        [&lt;c1051dd9&gt;] task_work_run+0xb9/0xf0
|        [&lt;c1002a51&gt;] do_notify_resume+0x51/0x80
|        [&lt;c140550a&gt;] work_notifysig+0x35/0x3b
|
| -&gt; #1 (&amp;tty-&gt;legacy_mutex/1){+.+...}:
|        [&lt;c107fe74&gt;] lock_acquire+0x84/0x190
|        [&lt;c140276c&gt;] mutex_lock_nested+0x6c/0x2f0
|        [&lt;c14051e6&gt;] tty_lock_nested+0x36/0x80
|        [&lt;c1405279&gt;] tty_lock_pair+0x29/0x70
|        [&lt;c12e0bb8&gt;] tty_release+0x118/0x470
|        [&lt;c1105e28&gt;] __fput+0xd8/0x270
|        [&lt;c1105fcd&gt;] ____fput+0xd/0x10
|        [&lt;c1051dd9&gt;] task_work_run+0xb9/0xf0
|        [&lt;c1002a51&gt;] do_notify_resume+0x51/0x80
|        [&lt;c140550a&gt;] work_notifysig+0x35/0x3b
|
| -&gt; #0 (&amp;tty-&gt;legacy_mutex){+.+.+.}:
|        [&lt;c107f3c9&gt;] __lock_acquire+0x1189/0x16a0
|        [&lt;c107fe74&gt;] lock_acquire+0x84/0x190
|        [&lt;c140276c&gt;] mutex_lock_nested+0x6c/0x2f0
|        [&lt;c14051e6&gt;] tty_lock_nested+0x36/0x80
|        [&lt;c140523f&gt;] tty_lock+0xf/0x20
|        [&lt;c12df8e4&gt;] __tty_hangup+0x54/0x410
|        [&lt;c12dfcb2&gt;] do_tty_hangup+0x12/0x20
|        [&lt;c104f763&gt;] process_one_work+0x1a3/0x5e0
|        [&lt;c104fec9&gt;] worker_thread+0x119/0x3a0
|        [&lt;c1055084&gt;] kthread+0x94/0xa0
|        [&lt;c140ca37&gt;] ret_from_kernel_thread+0x1b/0x28
|
|other info that might help us debug this:
|
|Chain exists of:
|  &amp;tty-&gt;legacy_mutex --&gt; &amp;tty-&gt;legacy_mutex/1 --&gt; (&amp;tty-&gt;hangup_work)
|
| Possible unsafe locking scenario:
|
|       CPU0                    CPU1
|       ----                    ----
|  lock((&amp;tty-&gt;hangup_work));
|                               lock(&amp;tty-&gt;legacy_mutex/1);
|                               lock((&amp;tty-&gt;hangup_work));
|  lock(&amp;tty-&gt;legacy_mutex);
|
| *** DEADLOCK ***

Before the patch, the mentioned tty_ldisc_release() looked like this:

|	tty_ldisc_halt(tty);
|	tty_ldisc_flush_works(tty);
|	tty_lock();

As can be seen, it first flushed the workqueue and then grabbed the
tty_lock. Now we grab the lock first:

|	tty_lock_pair(tty, o_tty);
|	tty_ldisc_halt(tty);
|	tty_ldisc_flush_works(tty);

so lockdep's complaint seems valid.

The earlier version of this patch took the ldisc_mutex since the other
user of tty_ldisc_flush_works() (tty_set_ldisc()) did so.
Peter Hurley then said that it should not be required. Since it
wasn't done earlier, I dropped this part.
The code under tty_ldisc_kill() was executed earlier with the tty lock
taken, so it is taken again here.

I was able to reproduce the deadlock on v3.8-rc1; this patch fixes the
problem in my testcase. I haven't noticed any other problems so far.
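
The ordering can be illustrated with a toy userspace analogue (plain
Python threading; all names are illustrative stand-ins, not kernel
code): a work item that itself needs the lock must not be flushed
(joined) while that lock is held, so the lock is dropped first.

```python
import threading

lock = threading.Lock()          # stands in for the tty legacy_mutex
state = {"hung_up": False}

def hangup_work():
    # the queued hangup work: it must take the lock, like __tty_hangup()
    with lock:
        state["hung_up"] = True

def release_ldisc():
    # Buggy ordering would flush (join) the work while holding the lock;
    # the worker then waits for the lock we hold and the join never
    # returns.  Fixed ordering, as in the patch: drop the lock, then flush.
    worker = threading.Thread(target=hangup_work)
    lock.acquire()               # tty_lock_pair()
    worker.start()
    lock.release()               # unlock before flushing the work
    worker.join()                # tty_ldisc_flush_works()
    return state["hung_up"]

print(release_ldisc())
```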

Signed-off-by: Sebastian Andrzej Siewior &lt;bigeasy@linutronix.de&gt;
Cc: Alan Cox &lt;alan@linux.intel.com&gt;
Cc: Peter Hurley &lt;peter@hurleysoftware.com&gt;
Cc: Bryan O'Donoghue &lt;bryan.odonoghue.lkml@nexus-software.ie&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>x86, mm: Patch out arch_flush_lazy_mmu_mode() when running on bare metal</title>
<updated>2013-04-17T04:48:29+00:00</updated>
<author>
<name>Boris Ostrovsky</name>
<email>boris.ostrovsky@oracle.com</email>
</author>
<published>2013-03-23T13:36:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=00c33275b78637fc5423ce05dd5e886836544eb1'/>
<id>00c33275b78637fc5423ce05dd5e886836544eb1</id>
<content type='text'>
commit 511ba86e1d386f671084b5d0e6f110bb30b8eeb2 upstream.

Invoking arch_flush_lazy_mmu_mode() results in calls to
preempt_enable()/disable(), which may have a performance impact.

Since lazy MMU is not used on bare metal, we can patch away
arch_flush_lazy_mmu_mode() so that it is never called in such an
environment.
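
The idea can be sketched with a toy analogue (plain Python; names are
illustrative, not the actual paravirt patching machinery): the flush is
a call through a patchable slot, and on bare metal that slot is
replaced with a no-op so the call costs nothing.

```python
calls = {"flush": 0}

def paravirt_flush():
    calls["flush"] += 1   # stands in for the preempt-toggling flush

def nop_flush():
    pass                  # what bare metal gets after patching

ops = {"flush_lazy_mmu": paravirt_flush}

def arch_flush_lazy_mmu_mode():
    ops["flush_lazy_mmu"]()

arch_flush_lazy_mmu_mode()          # paravirt guest: real flush runs
ops["flush_lazy_mmu"] = nop_flush   # "patched out" on bare metal
arch_flush_lazy_mmu_mode()          # now a no-op
print(calls["flush"])
```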

[ hpa: the previous patch "Fix vmalloc_fault oops during lazy MMU
  updates" may cause a minor performance regression on
  bare metal.  This patch resolves that performance regression.  It is
  somewhat unclear to me if this is a good -stable candidate. ]

Signed-off-by: Boris Ostrovsky &lt;boris.ostrovsky@oracle.com&gt;
Link: http://lkml.kernel.org/r/1364045796-10720-2-git-send-email-konrad.wilk@oracle.com
Tested-by: Josh Boyer &lt;jwboyer@redhat.com&gt;
Tested-by: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
Acked-by: Borislav Petkov &lt;bp@suse.de&gt;
Signed-off-by: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
Signed-off-by: H. Peter Anvin &lt;hpa@linux.intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>x86, mm, paravirt: Fix vmalloc_fault oops during lazy MMU updates</title>
<updated>2013-04-17T04:48:29+00:00</updated>
<author>
<name>Samu Kallio</name>
<email>samu.kallio@aberdeencloud.com</email>
</author>
<published>2013-03-23T13:36:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=a1df5f936a0aea88cbc41555e473c89f1384a64d'/>
<id>a1df5f936a0aea88cbc41555e473c89f1384a64d</id>
<content type='text'>
commit 1160c2779b826c6f5c08e5cc542de58fd1f667d5 upstream.

In paravirtualized x86_64 kernels, vmalloc_fault may cause an oops
when lazy MMU updates are enabled, because set_pgd effects are being
deferred.

One instance of this problem is during process mm cleanup with memory
cgroups enabled. The chain of events is as follows:

- zap_pte_range enables lazy MMU updates
- zap_pte_range eventually calls mem_cgroup_charge_statistics,
  which accesses the vmalloc'd mem_cgroup per-cpu stat area
- vmalloc_fault is triggered which tries to sync the corresponding
  PGD entry with set_pgd, but the update is deferred
- vmalloc_fault oopses due to a mismatch in the PUD entries

The oops usually looks like this:

------------[ cut here ]------------
kernel BUG at arch/x86/mm/fault.c:396!
invalid opcode: 0000 [#1] SMP
.. snip ..
CPU 1
Pid: 10866, comm: httpd Not tainted 3.6.10-4.fc18.x86_64 #1
RIP: e030:[&lt;ffffffff816271bf&gt;]  [&lt;ffffffff816271bf&gt;] vmalloc_fault+0x11f/0x208
.. snip ..
Call Trace:
 [&lt;ffffffff81627759&gt;] do_page_fault+0x399/0x4b0
 [&lt;ffffffff81004f4c&gt;] ? xen_mc_extend_args+0xec/0x110
 [&lt;ffffffff81624065&gt;] page_fault+0x25/0x30
 [&lt;ffffffff81184d03&gt;] ? mem_cgroup_charge_statistics.isra.13+0x13/0x50
 [&lt;ffffffff81186f78&gt;] __mem_cgroup_uncharge_common+0xd8/0x350
 [&lt;ffffffff8118aac7&gt;] mem_cgroup_uncharge_page+0x57/0x60
 [&lt;ffffffff8115fbc0&gt;] page_remove_rmap+0xe0/0x150
 [&lt;ffffffff8115311a&gt;] ? vm_normal_page+0x1a/0x80
 [&lt;ffffffff81153e61&gt;] unmap_single_vma+0x531/0x870
 [&lt;ffffffff81154962&gt;] unmap_vmas+0x52/0xa0
 [&lt;ffffffff81007442&gt;] ? pte_mfn_to_pfn+0x72/0x100
 [&lt;ffffffff8115c8f8&gt;] exit_mmap+0x98/0x170
 [&lt;ffffffff810050d9&gt;] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
 [&lt;ffffffff81059ce3&gt;] mmput+0x83/0xf0
 [&lt;ffffffff810624c4&gt;] exit_mm+0x104/0x130
 [&lt;ffffffff8106264a&gt;] do_exit+0x15a/0x8c0
 [&lt;ffffffff810630ff&gt;] do_group_exit+0x3f/0xa0
 [&lt;ffffffff81063177&gt;] sys_exit_group+0x17/0x20
 [&lt;ffffffff8162bae9&gt;] system_call_fastpath+0x16/0x1b

Calling arch_flush_lazy_mmu_mode immediately after set_pgd makes the
changes visible to the consistency checks.
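
The effect can be modelled with a toy analogue (plain Python; names are
illustrative, not the real paravirt code): in lazy mode a write only
gets queued, so a consistency check against the live table fails until
the batch is flushed, which is exactly what the added flush call after
set_pgd guarantees.

```python
visible = [0] * 4   # what the "hardware" page directory holds
pending = {}        # writes deferred while in lazy MMU mode

def set_pgd_lazy(index, value):
    pending[index] = value          # queued, not yet visible

def arch_flush_lazy_mmu_mode():
    for index, value in pending.items():
        visible[index] = value
    pending.clear()

set_pgd_lazy(1, 0xDEADB000)
stale = visible[1]          # a vmalloc_fault-style sync still sees 0 here
arch_flush_lazy_mmu_mode()  # the call the patch adds right after set_pgd
print(stale, visible[1])
```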

RedHat-Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=914737
Tested-by: Josh Boyer &lt;jwboyer@redhat.com&gt;
Reported-and-Tested-by: Krishna Raman &lt;kraman@redhat.com&gt;
Signed-off-by: Samu Kallio &lt;samu.kallio@aberdeencloud.com&gt;
Link: http://lkml.kernel.org/r/1364045796-10720-1-git-send-email-konrad.wilk@oracle.com
Tested-by: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
Signed-off-by: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
Signed-off-by: H. Peter Anvin &lt;hpa@linux.intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>sched_clock: Prevent 64bit inatomicity on 32bit systems</title>
<updated>2013-04-17T04:48:29+00:00</updated>
<author>
<name>Thomas Gleixner</name>
<email>tglx@linutronix.de</email>
</author>
<published>2013-04-06T08:10:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=092c48c05b0b25fdfca630441f34fd99b83deb1c'/>
<id>092c48c05b0b25fdfca630441f34fd99b83deb1c</id>
<content type='text'>
commit a1cbcaa9ea87b87a96b9fc465951dcf36e459ca2 upstream.

The sched_clock_remote() implementation has the following inatomicity
problem on 32bit systems when accessing the remote scd-&gt;clock, which
is a 64bit value.

CPU0			CPU1

sched_clock_local()	sched_clock_remote(CPU0)
...
			remote_clock = scd[CPU0]-&gt;clock
			    read_low32bit(scd[CPU0]-&gt;clock)
cmpxchg64(scd-&gt;clock,...)
			    read_high32bit(scd[CPU0]-&gt;clock)

While the update of scd-&gt;clock is using an atomic64 mechanism, the
readout on the remote cpu is not, which can cause completely bogus
readouts.

It is quite a rare problem, because it requires the update to hit the
narrow race window between the low/high readout and the update must go
across the 32bit boundary.

The resulting misbehaviour is that CPU1 will see the sched_clock on
CPU1 ~4 seconds ahead of its own and update CPU1's sched_clock value
to this bogus timestamp. This stays that way due to the clamping
implementation for about 4 seconds until the synchronization with
CLOCK_MONOTONIC undoes the problem.

The issue is hard to observe, because it might only result in less
accurate SCHED_OTHER timeslicing behaviour. To create observable
damage on realtime scheduling classes, the bogus update of CPU1's
sched_clock must happen in the context of a realtime thread, which
then gets charged 4 seconds of RT runtime; this causes the RT
throttler mechanism to trigger and prevents scheduling of RT tasks
for a little less than 4 seconds. So this is quite unlikely as well.

The issue was quite hard to decode as the reproduction time is between
2 days and 3 weeks and intrusive tracing makes it less likely, but the
following trace recorded with trace_clock=global, which uses
sched_clock_local(), gave the final hint:

  &lt;idle&gt;-0   0d..30 400269.477150: hrtimer_cancel: hrtimer=0xf7061e80
  &lt;idle&gt;-0   0d..30 400269.477151: hrtimer_start:  hrtimer=0xf7061e80 ...
irq/20-S-587 1d..32 400273.772118: sched_wakeup:   comm= ... target_cpu=0
  &lt;idle&gt;-0   0dN.30 400273.772118: hrtimer_cancel: hrtimer=0xf7061e80

What happens is that CPU0 goes idle and invokes
sched_clock_idle_sleep_event(), which invokes sched_clock_local(), and
CPU1 runs a remote wakeup for CPU0 at the same time, which invokes
sched_clock_remote(). The time jump gets propagated to CPU0 via
sched_clock_remote() and stays stale on both cores for ~4 seconds.

There are only two other possibilities, which could cause a stale
sched clock:

1) ktime_get() which reads out CLOCK_MONOTONIC returns a sporadic
   wrong value.

2) sched_clock() which reads the TSC returns a sporadic wrong value.

#1 can be excluded because sched_clock would continue to increase for
   one jiffy and then go stale.

#2 can be excluded because it would not make the clock jump
   forward. It would just result in a stale sched_clock for one jiffy.

After quite some brain twisting and finding the same pattern on other
traces, sched_clock_remote() remained the only place which could cause
such a problem and as explained above it's indeed racy on 32bit
systems.

So while on 64bit systems the readout is atomic, we need to verify the
remote readout on 32bit machines. We need to protect the local-&gt;clock
readout in sched_clock_remote() on 32bit as well because an NMI could
hit between the low and the high readout, call sched_clock_local() and
modify local-&gt;clock.
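
The race can be modelled deterministically with a toy analogue (plain
Python; illustrative only, not kernel code): store the clock as two
32bit halves and let an update land between the two half-reads. The
torn result is nearly 2**32 ns, i.e. about 4 seconds, ahead, matching
the observed jump.

```python
MASK = 2 ** 32

class Clock64:
    # a 64bit clock stored as two 32bit halves, as on a 32bit system
    def __init__(self, value):
        self.hi, self.lo = divmod(value, MASK)

    def value(self):
        return self.hi * MASK + self.lo

def bump(clock):
    # an update that carries across the 32bit boundary,
    # e.g. 0xFFFFFFFF to 0x100000000
    clock.hi, clock.lo = divmod(clock.value() + 1, MASK)

def torn_read(clock, update_between_halves):
    lo = clock.lo                  # read_low32bit(scd[CPU0].clock)
    update_between_halves(clock)   # the cmpxchg64 update lands here
    hi = clock.hi                  # read_high32bit(scd[CPU0].clock)
    return hi * MASK + lo

def safe_read(clock):
    # retry until the high half is stable across the readout; a cheap
    # stand-in for the cmpxchg64-based atomic readout used by the fix
    while True:
        hi = clock.hi
        lo = clock.lo
        if clock.hi == hi:
            return hi * MASK + lo

clock = Clock64(MASK - 1)
bogus = torn_read(clock, bump)   # new high half paired with the old low half
print(bogus - clock.value())     # almost 2**32, i.e. ~4 seconds in ns
```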

Thanks to Siegfried Wulsch for bearing with my debug requests and
going through the tedious task of running a bunch of reproducer
systems to generate the debug information that let me decode the
issue.

Reported-by: Siegfried Wulsch &lt;Siegfried.Wulsch@rovema.de&gt;
Acked-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1304051544160.21884@ionos
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>ftrace: Move ftrace_filter_lseek out of CONFIG_DYNAMIC_FTRACE section</title>
<updated>2013-04-17T04:48:29+00:00</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2013-04-12T20:40:13+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=103d7cb05682db5a57e40b96f2029be91171c112'/>
<id>103d7cb05682db5a57e40b96f2029be91171c112</id>
<content type='text'>
commit 7f49ef69db6bbf756c0abca7e9b65b32e999eec8 upstream.

Since ftrace_filter_lseek is now used with ftrace_pid_fops, it needs
to be moved out of the #ifdef CONFIG_DYNAMIC_FTRACE section, as
ftrace_pid_fops is defined even when DYNAMIC_FTRACE is not.

Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Namhyung Kim &lt;namhyung@kernel.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>udl: handle EDID failure properly.</title>
<updated>2013-04-17T04:48:29+00:00</updated>
<author>
<name>Dave Airlie</name>
<email>airlied@gmail.com</email>
</author>
<published>2013-04-12T03:25:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=69bcccfef278d0b58b684fbd3ead0c52e4384d53'/>
<id>69bcccfef278d0b58b684fbd3ead0c52e4384d53</id>
<content type='text'>
commit 1baee58638fc58248625255f5c5fcdb987f11b1f upstream.

Don't oops seems proper.

Signed-off-by: Dave Airlie &lt;airlied@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>tracing: Fix possible NULL pointer dereferences</title>
<updated>2013-04-17T04:48:29+00:00</updated>
<author>
<name>Namhyung Kim</name>
<email>namhyung.kim@lge.com</email>
</author>
<published>2013-04-11T06:55:01+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=bea692a5004128c029112632c546e320d71fe99d'/>
<id>bea692a5004128c029112632c546e320d71fe99d</id>
<content type='text'>
commit 6a76f8c0ab19f215af2a3442870eeb5f0e81998d upstream.

Currently the set_ftrace_pid and set_graph_function files use
seq_lseek for their fops.  However, seq_open() is called only for
FMODE_READ in the fops-&gt;open(), so if a user tries to seek on one of
those files after opening it for writing, it sees a NULL seq_file and
then panics.

It can easily be reproduced with the following commands:

  $ cd /sys/kernel/debug/tracing
  $ echo 1234 | sudo tee -a set_ftrace_pid

In this example, GNU coreutils' tee opens the file with fopen(, "a"),
and fopen() internally calls lseek().
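
The underlying pattern can be sketched with a toy analogue (plain
Python; all names illustrative): per-open seek state only exists when
the file was opened for reading, so the seek handler must tolerate it
being absent rather than dereferencing it unconditionally.

```python
class File:
    def __init__(self, readable):
        # seq state is created in open() only for readable files
        self.seq = {"pos": 0} if readable else None

def guarded_lseek(f, offset):
    if f.seq is None:       # opened write-only: nothing to seek over
        return 0
    f.seq["pos"] = offset   # normal seq-style seek for readers
    return offset

writer = File(readable=False)
reader = File(readable=True)
print(guarded_lseek(writer, 10), guarded_lseek(reader, 10))
```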

Link: http://lkml.kernel.org/r/1365663302-2170-1-git-send-email-namhyung@kernel.org

Signed-off-by: Namhyung Kim &lt;namhyung@kernel.org&gt;
Cc: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: Namhyung Kim &lt;namhyung.kim@lge.com&gt;
Cc: stable@vger.kernel.org
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>x86-32: Fix possible incomplete TLB invalidate with PAE pagetables</title>
<updated>2013-04-17T04:48:29+00:00</updated>
<author>
<name>Dave Hansen</name>
<email>dave@sr71.net</email>
</author>
<published>2013-04-12T23:23:54+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=324cdc3f7e6a752fe0e95fa7b5c9664171a34ded'/>
<id>324cdc3f7e6a752fe0e95fa7b5c9664171a34ded</id>
<content type='text'>
commit 1de14c3c5cbc9bb17e9dcc648cda51c0c85d54b9 upstream.

This patch attempts to fix:

	https://bugzilla.kernel.org/show_bug.cgi?id=56461

The symptom is a crash and messages like this:

	chrome: Corrupted page table at address 34a03000
	*pdpt = 0000000000000000 *pde = 0000000000000000
	Bad pagetable: 000f [#1] PREEMPT SMP

Ingo guesses this got introduced by commit 611ae8e3f520 ("x86/tlb:
enable tlb flush range support for x86") since that code started to free
unused pagetables.

On x86-32 PAE kernels, that new code has the potential to free an entire
PMD page and will clear one of the four page-directory-pointer-table
(aka pgd_t entries).

The hardware aggressively "caches" these top-level entries and invlpg
does not actually affect the CPU's copy.  If we clear one we *HAVE* to
do a full TLB flush, otherwise we might continue using a freed pmd page.
(note, we do this properly on the population side in pud_populate()).

This patch tracks whenever we clear one of these entries in the 'struct
mmu_gather', and ensures that we follow up with a full tlb flush.

BTW, I disassembled and checked that:

	if (tlb-&gt;fullmm == 0)
and
	if (!tlb-&gt;fullmm &amp;&amp; !tlb-&gt;need_flush_all)

generate essentially the same code, so there should be zero impact on
the !PAE case.

Signed-off-by: Dave Hansen &lt;dave.hansen@linux.intel.com&gt;
Cc: Peter Anvin &lt;hpa@zytor.com&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: Artem S Tashkinov &lt;t.artem@mailcity.com&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>gpio: fix wrong checking condition for gpio range</title>
<updated>2013-04-17T04:48:29+00:00</updated>
<author>
<name>Haojian Zhuang</name>
<email>haojian.zhuang@linaro.org</email>
</author>
<published>2013-02-17T11:42:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=aabc281d18eef3106c9f58c06e466a9ffdf33759'/>
<id>aabc281d18eef3106c9f58c06e466a9ffdf33759</id>
<content type='text'>
commit ad4e1a7caf937ad395ced585ca85a7d14395dc80 upstream.

Since index starts at 0, the condition in "while (index++)" evaluates
to false on the first pass, so the loop body never runs and no range
is ever checked. The loop as written never does the work it was meant
to do.

Replace it with an endless loop that keeps parsing the "gpio-ranges"
property until it runs out of entries.

Signed-off-by: Haojian Zhuang &lt;haojian.zhuang@linaro.org&gt;
Reviewed-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Signed-off-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Signed-off-by: Jonghwan Choi &lt;jhbird.choi@samsung.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
</feed>
