Age | Commit message | Author |
|
The gpu2d hang fix has a side effect: it causes a flicker issue for the
popup window menu on Android. Do not integrate this fix until the side
effect is fixed.
Revert "ENGR00302036-3 gpu:gpu2d may cause bus hang in some corner case"
This reverts commit 0d5505cfd0b50db4bd046270b119c4dae0a143c2.
Signed-off-by: Richard Liu <r66033@freescale.com>
Acked-by: Shawn Guo
|
|
Vivante patch name:
cl17466.17776.rls.lockup.2dhang(clear.blit)
-Updated the outstanding request limit to 12.
-Refined the 2D chip feature check.
-Refined the 2D cache flush operation
(avoid FE and PE accessing memory through the same port).
-Enabled cache flush for filterblt.
-Dynamically enable SPLIT_RECT by checking the chip feature (disabled for us).
-Use brush stretch blt for the clear operation.
Date: Mar 26, 2014
Signed-off-by: Loren Huang <b02279@freescale.com>
Acked-by: Shawn Guo
|
|
These are Category B, hence the workaround is essential.
Signed-off-by: Nitin Garg <nitin.garg@freescale.com>
|
|
drivers/usb/host/ehci-arc.c: In function 'ehci_fsl_drv_resume':
drivers/usb/host/ehci-arc.c:805: warning: ISO C90 forbids mixed declarations and code
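The warning fires when a declaration follows a statement inside a block;
the usual fix is hoisting the declaration (illustrative snippet, not the
actual driver code):

    /* warns under ISO C90: a declaration after a statement */
    pr_debug("resuming\n");
    u32 tmp = ehci_readl(ehci, &ehci->regs->command);

    /* fixed: declaration at the top of the block */
    u32 tmp;

    pr_debug("resuming\n");
    tmp = ehci_readl(ehci, &ehci->regs->command);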
Signed-off-by: Peter Chen <peter.chen@freescale.com>
|
|
This issue happens when multiple threads try to idle the GPU at the
same time. The root cause is flawed logic around powerMutex, which lets
the CPU keep accessing GPU AHB registers after the GPU is suspended
(clock off); that locks up the bus and hangs the whole system.
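A minimal sketch of the guard this implies; all names here are
illustrative, not the driver's actual symbols:

    static DEFINE_MUTEX(power_mutex);
    static bool gpu_suspended;

    static int gpu_idle(struct gpu_device *gpu)
    {
        int ret = 0;

        mutex_lock(&power_mutex);
        if (gpu_suspended)
            ret = -EBUSY;  /* clock off: AHB access would hang the bus */
        else
            writel(GPU_CMD_IDLE, gpu->ahb_base + GPU_CTRL_REG);
        mutex_unlock(&power_mutex);
        return ret;
    }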
Signed-off-by: Richard Liu <r66033@freescale.com>
Acked-by: Jason Liu
|
|
We found a bug where USB produces endless interrupts during resume
after the steps below:
- nothing connected at the USB OTG port, but the USB gadget driver enabled
- system enters suspend
- plug in a USB cable
- system leaves suspend
The reason for the endless interrupts: the host controller resume routine
takes the PHY out of low power mode, but the device controller resume
routine has not yet been called, so the software still considers the PHY
to be in low power mode. When the vbus change interrupt then arrives, the
wakeup routine treats it as a wakeup interrupt, but in fact it is not;
hence the endless interrupts.
The fix is to do nothing in the host controller resume routine if the
current USB mode is not host mode; the device controller resume routine
will handle it.
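A sketch of the shape of that check; the register offset and bit names
are illustrative, not the driver's actual defines:

    /* in the host controller resume path: */
    u32 mode = readl(hcd->regs + USBMODE_OFFSET) & USBMODE_CM_MASK;

    if (mode != USBMODE_CM_HOST)
        return 0;  /* noop: device controller resume handles the PHY */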
Signed-off-by: Peter Chen <peter.chen@freescale.com>
|
|
The machine driver switches sysclk to MCLK in hw_free(). When switching
headphone -> HDMI -> headphone via headphone hotplug in Android, the
HDMI -> headphone transition makes DAPM call POST_PMD -> PRE_PMU and then
hw_params(). In sysclk_event(), FLL_ENA is therefore not set, because
sysclk is currently MCLK; audio is then blocked, because hw_params() sets
sysclk to the FLL but the FLL is never enabled.
Update sysclk_event() so that FLL_ENA is always set while audio is enabled.
Signed-off-by: Shengjiu Wang <b02247@freescale.com>
|
|
Set the correct permissions of the fsl_disp_property sysfs entry.
Acked-by: Liu Ying <Ying.Liu@freescale.com>
Signed-off-by: Julien Olivain <julien.olivain@freescale.com>
(cherry picked from commit 9db2d982c90b6edb3dfebe03a68f50f381bb99bb)
|
|
On hotplug, the user closes and reopens the device; shutdown() is called
and MCLK is disabled, after which I2C accesses to the wm8962 fail, so the
driver does not switch to the headphone.
Move the clock enable/disable into the codec driver to fix this issue.
Signed-off-by: Shengjiu Wang <b02247@freescale.com>
|
|
In order to avoid any random pixelation issue, we should follow a correct
display controller/LDB initialization sequence, that is, LDB channel should
be enabled after its relevant display controller is enabled. Since the
enable() callback of the mxc display driver is called after a display
controller is enabled, this patch uses the enable()/disable() callbacks
to replace the fb notifier callback implemented in the LDB driver. Also,
we disable an LDB channel in the setup() callback of the mxc display driver
to make sure of the correct sequence in any fb set_par() function. As the
enable()/disable() callbacks are always invoked with fb_blank(), this
patch drops the suspend()/resume() functions of the LDB driver which
do nothing but enable/disable LDB channels. We also remove the code to
enable ldb di clocks because they can be enabled when their relevant
child pixel clocks are enabled.
Conflicts:
drivers/video/mxc/ldb.c
drivers/video/mxc/mxc_dispdrv.h
Signed-off-by: Liu Ying <Ying.Liu@freescale.com>
(cherry picked from commit 8d189a9cc067f74cc3d49757af25118b459b9726)
|
|
Enable the UBIFS SELinux setting
Signed-off-by: Ke Qinghua <qinghua.ke@freescale.com>
|
|
Addresses above 2G will cause a system reboot; this was fixed in the
original patch. Error-check code is added here based on the original logic.
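The added check presumably amounts to something like the following
(SZ_2G from linux/sizes.h; the variable names are illustrative):

    /* reject any range that reaches into the >2G space */
    if (address >= SZ_2G || size > SZ_2G - address)
        return -EINVAL;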
Signed-off-by: Xianzhong <b07117@freescale.com>
Acked-by: Jason Liu
|
|
[net-next commit c26d6b46da3ee86fa8a864347331e5513ca84c2b]
If we don't need scope id, we should initialize it to zero.
Same for ->sin6_flowinfo.
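In practice that means fully initializing the returned sockaddr in the
receive path, roughly:

    struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)msg->msg_name;

    sin6->sin6_family   = AF_INET6;
    sin6->sin6_port     = 0;
    sin6->sin6_addr     = ipv6_hdr(skb)->saddr;
    sin6->sin6_flowinfo = 0;  /* don't leak stale stack data */
    sin6->sin6_scope_id = 0;  /* set only when the address needs one */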
Change-Id: I28e4bc9593e76fc3434052182466fab4bb8ccf3a
Cc: Lorenzo Colitti <lorenzo@google.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <amwang@redhat.com>
Acked-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
[net-next commit fbfe80c890a1dc521d0b629b870e32fcffff0da5]
ping_v6_sendmsg currently returns 0 on success. It should return
the number of bytes written instead.
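A sketch of the success path after the fix:

    /* was: return err;  now report bytes written on success */
    if (err)
        return err;
    return len;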
Bug: 9469865
Change-Id: I82b7d3a37ba91ad24e6dbd97a4880745ce16ad31
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
[net-next commit a1bdc45580fc19e968b32ad27cd7e476a4aa58f6]
Bug: 9469865
Change-Id: I480f8ce95956dd8f17fbbb26dc60cc162f8ec933
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
[backport of net-next 6d0bfe22611602f36617bc7aa2ffa1bbb2f54c67]
This adds the ability to send ICMPv6 echo requests without a
raw socket. The equivalent ability for ICMPv4 was added in
2011.
Instead of having separate code paths for IPv4 and IPv6, make
most of the code in net/ipv4/ping.c dual-stack and only add a
few IPv6-specific bits (like the protocol definition) to a new
net/ipv6/ping.c. Hopefully this will reduce divergence and/or
duplication of bugs in the future.
Caveats:
- Setting options via ancillary data (e.g., using IPV6_PKTINFO
to specify the outgoing interface) is not yet supported.
- There are no separate security settings for IPv4 and IPv6;
everything is controlled by /proc/net/ipv4/ping_group_range.
- The proc interface does not yet display IPv6 ping sockets
properly.
Tested with a patched copy of ping6 and using raw socket calls.
Compiles and works with all of CONFIG_IPV6={n,m,y}.
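From userspace the new capability looks roughly like this (unprivileged,
no raw socket; the sending gid must fall within ping_group_range):

    #include <netinet/icmp6.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    int send_ping6(const struct sockaddr_in6 *dst)
    {
        int fd = socket(AF_INET6, SOCK_DGRAM, IPPROTO_ICMPV6);
        struct icmp6_hdr hdr;

        if (fd < 0)
            return -1;
        memset(&hdr, 0, sizeof(hdr));
        hdr.icmp6_type = ICMP6_ECHO_REQUEST;
        /* id and checksum are filled in by the kernel for ping sockets */
        return sendto(fd, &hdr, sizeof(hdr), 0,
                      (const struct sockaddr *)dst, sizeof(*dst));
    }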
Change-Id: Ia359af556021344fc7f890c21383aadf950b6498
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
[lorenzo@google.com: backported to 3.0]
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
|
|
[ Upstream commit b531ed61a2a2a77eeb2f7c88b49aa5ec7d9880d8 ]
We should get 'type' and 'code' from the outer ICMP header.
Signed-off-by: Li Wei <lw@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
functions
[net-next commit b7ef213ef65256168df83ddfbb8131ed9adc10f9]
__ipv6_addr_needs_scope_id checks if an ipv6 address needs to supply
a 'sin6_scope_id != 0'. 'sin6_scope_id != 0' was enforced in case
of link-local addresses. To support interface-local multicast these
checks had to be enhanced and are now consolidated into these new helper
functions.
v2:
a) migrated to struct ipv6_addr_props
v3:
a) reverted changes for ipv6_addr_props
b) test for address type instead of comparing scope
v4:
a) unchanged
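The resulting helper is roughly:

    static inline bool __ipv6_addr_needs_scope_id(int type)
    {
        return type & IPV6_ADDR_LINKLOCAL ||
               (type & IPV6_ADDR_MULTICAST &&
                (type & (IPV6_ADDR_LOOPBACK | IPV6_ADDR_LINKLOCAL)));
    }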
Change-Id: Id6fc54cec61f967928e08a9eba4f857157d973a3
Suggested-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Cc: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Acked-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
[ Upstream commit 7ebe183c6d444ef5587d803b64a1f4734b18c564 ]
On SACK reneging the sender immediately retransmits and forces a
timeout but disables Eifel (undo). If the (buggy) receiver does not
drop any packet this can trigger a false slow-start retransmit storm
driven by the ACKs of the original packets. This can be detected with
undo and TCP timestamps.
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit f4541d60a449afd40448b06496dcd510f505928e ]
A long-standing problem with TSO is the fact that tcp_tso_should_defer()
rearms the deferred timer, while it should not.
The current code leads to the following bad bursty behavior:
20:11:24.484333 IP A > B: . 297161:316921(19760) ack 1 win 119
20:11:24.484337 IP B > A: . ack 263721 win 1117
20:11:24.485086 IP B > A: . ack 265241 win 1117
20:11:24.485925 IP B > A: . ack 266761 win 1117
20:11:24.486759 IP B > A: . ack 268281 win 1117
20:11:24.487594 IP B > A: . ack 269801 win 1117
20:11:24.488430 IP B > A: . ack 271321 win 1117
20:11:24.489267 IP B > A: . ack 272841 win 1117
20:11:24.490104 IP B > A: . ack 274361 win 1117
20:11:24.490939 IP B > A: . ack 275881 win 1117
20:11:24.491775 IP B > A: . ack 277401 win 1117
20:11:24.491784 IP A > B: . 316921:332881(15960) ack 1 win 119
20:11:24.492620 IP B > A: . ack 278921 win 1117
20:11:24.493448 IP B > A: . ack 280441 win 1117
20:11:24.494286 IP B > A: . ack 281961 win 1117
20:11:24.495122 IP B > A: . ack 283481 win 1117
20:11:24.495958 IP B > A: . ack 285001 win 1117
20:11:24.496791 IP B > A: . ack 286521 win 1117
20:11:24.497628 IP B > A: . ack 288041 win 1117
20:11:24.498459 IP B > A: . ack 289561 win 1117
20:11:24.499296 IP B > A: . ack 291081 win 1117
20:11:24.500133 IP B > A: . ack 292601 win 1117
20:11:24.500970 IP B > A: . ack 294121 win 1117
20:11:24.501388 IP B > A: . ack 295641 win 1117
20:11:24.501398 IP A > B: . 332881:351881(19000) ack 1 win 119
While the expected behavior is more like:
20:19:49.259620 IP A > B: . 197601:202161(4560) ack 1 win 119
20:19:49.260446 IP B > A: . ack 154281 win 1212
20:19:49.261282 IP B > A: . ack 155801 win 1212
20:19:49.262125 IP B > A: . ack 157321 win 1212
20:19:49.262136 IP A > B: . 202161:206721(4560) ack 1 win 119
20:19:49.262958 IP B > A: . ack 158841 win 1212
20:19:49.263795 IP B > A: . ack 160361 win 1212
20:19:49.264628 IP B > A: . ack 161881 win 1212
20:19:49.264637 IP A > B: . 206721:211281(4560) ack 1 win 119
20:19:49.265465 IP B > A: . ack 163401 win 1212
20:19:49.265886 IP B > A: . ack 164921 win 1212
20:19:49.266722 IP B > A: . ack 166441 win 1212
20:19:49.266732 IP A > B: . 211281:215841(4560) ack 1 win 119
20:19:49.267559 IP B > A: . ack 167961 win 1212
20:19:49.268394 IP B > A: . ack 169481 win 1212
20:19:49.269232 IP B > A: . ack 171001 win 1212
20:19:49.269241 IP A > B: . 215841:221161(5320) ack 1 win 119
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Van Jacobson <vanj@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Nandita Dukkipati <nanditad@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit a79ca223e029aa4f09abb337accf1812c900a800 ]
Signed-off-by: Hong Zhiguo <honkiko@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 5a3da1fe9561828d0ca7eca664b16ec2b9bf0055 ]
This patch introduces a constant limit of the fragment queue hash
table bucket list lengths. Currently the limit of 128 is chosen somewhat
arbitrarily and just ensures that we can fill up the fragment cache with
empty packets up to the default ip_frag_high_thresh limits. It should
just protect against list iteration eating considerable amounts of cpu.
If we reach the maximum length in one hash bucket a warning is printed.
This is implemented on the caller side of inet_frag_find to distinguish
between the different users of inet_fragment.c.
I dropped the out of memory warning in the ipv4 fragment lookup path,
because we already get a warning by the slab allocator.
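Conceptually the lookup side gains a chain-length guard along these lines
(a sketch; the real patch distinguishes the different callers):

    depth = 0;
    hlist_for_each_entry(q, &hb->chain, list) {
        if (match(q, key))
            return q;
        depth++;
    }
    if (depth > INETFRAGS_MAXDEPTH)
        return ERR_PTR(-ENOBUFS);  /* caller prints the warning */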
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Jesper Dangaard Brouer <jbrouer@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit ddf64354af4a702ee0b85d0a285ba74c7278a460 ]
v2:
a) used struct ipv6_addr_props
v3:
a) reverted changes for ipv6_addr_props
v4:
a) do not use __ipv6_addr_needs_scope_id
Cc: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Setting net.ipv6.conf.<interface>.accept_ra=2 causes the kernel
to accept RAs even when forwarding is enabled. However, enabling
forwarding purges all default routes on the system, breaking
connectivity until the next RA is received. Fix this by not
purging default routes on interfaces that have accept_ra=2.
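The purge loop then skips such interfaces, roughly:

    /* in rt6_purge_dflt_routers()-style iteration: */
    if (rt->rt6i_flags & (RTF_DEFAULT | RTF_ADDRCONF) &&
        (!rt->rt6i_idev || rt->rt6i_idev->cnf.accept_ra != 2)) {
        /* ... delete this default route ... */
    }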
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Acked-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
[ Upstream commit bd30e947207e2ea0ff2c08f5b4a03025ddce48d3 ]
They will be created at output, if ever needed. This avoids creating
empty neighbor entries when TPROXYing/Forwarding packets for addresses
that are not even directly reachable.
Note that IPv4 already handles it this way. No neighbor entries are
created for local input.
Tested by myself and a customer.
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit c7ac8679bec9397afe8918f788cbcef88c38da54 upstream.
The message size allocated for rtnl ifinfo dumps was limited to
a single page. This is not enough for additional interface info
available with devices that support SR-IOV and caused a bug in
which VF info would not be displayed if more than approximately
40 VFs were created per interface.
Implement a new function pointer for the rtnl_register service that will
calculate the amount of data required for the ifinfo dump and allocate
enough data to satisfy the request.
Signed-off-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit d4596bad2a713fcd0def492b1960e6d899d5baa8 ]
Cc: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 60713a0ca7fd6651b951cc1b4dbd528d1fc0281b ]
As documented in RFC4861 (Neighbor Discovery for IP version 6) 7.2.6.,
unsolicited neighbour advertisements should be sent to the all-nodes
multicast address.
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 14edd87dc67311556f1254a8f29cf4dd6cb5b7d1 ]
Commit a02e4b7dae4551 (Demark default hoplimit as zero) only changed the
hoplimit check condition and the default value in ip6_dst_hoplimit; it did
not zero all default hoplimit values.
Keep zeroing ip6_template_metrics[RTAX_HOPLIMIT - 1] to force it into the
const section, as done by a37e6e344910 (net: force dst_default_metrics to
const section).
Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 4c67525849e0b7f4bd4fab2487ec9e43ea52ef29 ]
After commit e2446eaa ("tcp_v4_send_reset: binding oif to iif in no
sock case"), tcp resets are always lost when routing is asymmetric.
Yes, backing out that patch will result in misrouting of resets for
dead connections which used interface binding while alive, but we
actually cannot do anything here. What's dead is dead, and correctly
handling normal unbound connections is obviously the priority.
Comment to comment:
> This has few benefits:
> 1. tcp_v6_send_reset already did that.
It was done to route resets for IPv6 link local addresses. It was a
mistake to do so for global addresses. The patch fixes this as well.
Actually, the problem appears to be even more serious than guaranteed
loss of resets. As reported by Sergey Soloviev <sol@eqv.ru>, those
misrouted resets create a lot of arp traffic and a huge amount of
unresolved arp entries, bringing to their knees the NAT firewalls which
use asymmetric routing.
Signed-off-by: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 96af69ea2a83d292238bdba20e4508ee967cf8cb ]
mip6_mh_filter() should not modify its input, or else its caller
would need to recompute ipv6_hdr() if skb->head is reallocated.
Use skb_header_pointer() instead of pskb_may_pull()
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 1b05c4b50edbddbdde715c4a7350629819f6655e ]
icmpv6_filter() should not modify its input, or else its caller
would need to recompute ipv6_hdr() if skb->head is reallocated.
Use skb_header_pointer() instead of pskb_may_pull() and
change the prototype to make clear both sk and skb are const.
Also, if icmpv6 header cannot be found, do not deliver the packet,
as we do in IPv4.
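The pattern, roughly:

    const struct icmp6hdr *hdr;
    struct icmp6hdr _hdr;

    /* read the header without modifying the skb */
    hdr = skb_header_pointer(skb, skb_transport_offset(skb),
                             sizeof(_hdr), &_hdr);
    if (!hdr)
        return 1;  /* header not found: filter the packet out */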
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 6825a26c2dc21eb4f8df9c06d3786ddec97cf53b ]
as we hold dst_entry before we call __ip6_del_rt,
so we should alse call dst_release not only return
-ENOENT when the rt6_info is ip6_null_entry.
and we already hold the dst entry, so I think it's
safe to call dst_release out of the write-read lock.
Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 4acd4945cd1e1f92b20d14e349c6c6a52acbd42d ]
Cong Wang reports that lockdep detected suspicious RCU usage while
enabling IPV6 forwarding:
[ 1123.310275] ===============================
[ 1123.442202] [ INFO: suspicious RCU usage. ]
[ 1123.558207] 3.6.0-rc1+ #109 Not tainted
[ 1123.665204] -------------------------------
[ 1123.768254] include/linux/rcupdate.h:430 Illegal context switch in RCU read-side critical section!
[ 1123.992320]
[ 1123.992320] other info that might help us debug this:
[ 1123.992320]
[ 1124.307382]
[ 1124.307382] rcu_scheduler_active = 1, debug_locks = 0
[ 1124.522220] 2 locks held by sysctl/5710:
[ 1124.648364] #0: (rtnl_mutex){+.+.+.}, at: [<ffffffff81768498>] rtnl_trylock+0x15/0x17
[ 1124.882211] #1: (rcu_read_lock){.+.+.+}, at: [<ffffffff81871df8>] rcu_lock_acquire+0x0/0x29
[ 1125.085209]
[ 1125.085209] stack backtrace:
[ 1125.332213] Pid: 5710, comm: sysctl Not tainted 3.6.0-rc1+ #109
[ 1125.441291] Call Trace:
[ 1125.545281] [<ffffffff8109d915>] lockdep_rcu_suspicious+0x109/0x112
[ 1125.667212] [<ffffffff8107c240>] rcu_preempt_sleep_check+0x45/0x47
[ 1125.781838] [<ffffffff8107c260>] __might_sleep+0x1e/0x19b
[...]
[ 1127.445223] [<ffffffff81757ac5>] call_netdevice_notifiers+0x4a/0x4f
[...]
[ 1127.772188] [<ffffffff8175e125>] dev_disable_lro+0x32/0x6b
[ 1127.885174] [<ffffffff81872d26>] dev_forward_change+0x30/0xcb
[ 1128.013214] [<ffffffff818738c4>] addrconf_forward_change+0x85/0xc5
[...]
addrconf_forward_change() uses RCU iteration over the netdev list,
which is unnecessary since it already holds the RTNL lock. We also
cannot reasonably require netdevice notifier functions not to sleep.
Reported-by: Cong Wang <amwang@redhat.com>
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit d189634ecab947c10f6f832258b103d0bbfe73cc ]
/proc/net/ipv6_route reflects the contents of fib_table_hash. The proc
handler is installed in ip6_route_net_init() whereas fib_table_hash is
allocated in fib6_net_init() _after_ the proc handler has been installed.
This opens up a short time frame to access fib_table_hash with its pants
down.
Move the registration of the proc files to a later point in the init
order to avoid the race.
Tested :-)
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
It is possible to construct an event group with a software event as a
group leader and then subsequently add a hardware event to the group.
This results in the event group being validated by adding all members
of the group to a fake PMU and attempting to allocate each event on
their respective PMU.
Unfortunately, for software events without a corresponding arm_pmu, this
results in a kernel crash attempting to dereference the ->get_event_idx
function pointer.
This patch fixes the problem by checking explicitly for software events
and ignoring those in event validation (since they can always be
scheduled). We will probably want to revisit this for 3.12, since the
validation checks don't appear to work correctly when dealing with
multiple hardware PMUs anyway.
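The check in validate_event() is essentially:

    /* software events can always be scheduled: skip PMU validation */
    if (is_software_event(event))
        return 1;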
Cc: <stable@vger.kernel.org>
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Tested-by: Vince Weaver <vincent.weaver@maine.edu>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Conflicts:
arch/arm/kernel/perf_event.c
|
|
Fix constraint check in armpmu_map_hw_event().
Reported-and-tested-by: Vince Weaver <vincent.weaver@maine.edu>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Conflicts:
arch/arm/kernel/perf_event.c
|
|
The {get,put}_user macros don't perform range checking on the provided
__user address when !CPU_HAS_DOMAINS.
This patch reworks the out-of-line assembly accessors to check the user
address against a specified limit, returning -EFAULT if it is out of
range.
[will: changed get_user register allocation to match put_user]
[rmk: fixed building on older ARM architectures]
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Enable SELinux in the configuration
Signed-off-by: Ke Qinghua <qinghua.ke@freescale.com>
|
|
blk_cleanup_queue() may be called before elevator is set up on a
queue which triggers the following oops.
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<ffffffff8125a69c>] elv_drain_elevator+0x1c/0x70
...
Pid: 830, comm: kworker/0:2 Not tainted 3.1.0-next-20111025_64+ #1590
Bochs Bochs
RIP: 0010:[<ffffffff8125a69c>] [<ffffffff8125a69c>] elv_drain_elevator+0x1c/0x70
...
Call Trace:
[<ffffffff8125da92>] blk_drain_queue+0x42/0x70
[<ffffffff8125db90>] blk_cleanup_queue+0xd0/0x1c0
[<ffffffff81469640>] md_free+0x50/0x70
[<ffffffff8126f43b>] kobject_release+0x8b/0x1d0
[<ffffffff81270d56>] kref_put+0x36/0xa0
[<ffffffff8126f2b7>] kobject_put+0x27/0x60
[<ffffffff814693af>] mddev_delayed_delete+0x2f/0x40
[<ffffffff81083450>] process_one_work+0x100/0x3b0
[<ffffffff8108527f>] worker_thread+0x15f/0x3a0
[<ffffffff81089937>] kthread+0x87/0x90
[<ffffffff81621834>] kernel_thread_helper+0x4/0x10
Fix it by making blk_cleanup_queue() check whether q->elevator is set
up before invoking blk_drain_queue.
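The fix boils down to a guard of this shape:

    /* only drain if an elevator has been set up on the queue */
    if (q->elevator)
        blk_drain_queue(q, true);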
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-and-tested-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
scsi_run_queue() examines all SCSI devices that are present on
the starved list. Since scsi_run_queue() unlocks the SCSI host
lock a SCSI device can get removed after it has been removed
from the starved list and before its queue is run. Protect
against that race condition by holding a reference on the
queue while running it.
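A sketch of that reference dance around the unlocked section:

    slq = sdev->request_queue;
    if (!blk_get_queue(slq))
        continue;  /* device is already going away */

    spin_unlock_irqrestore(shost->host_lock, flags);
    blk_run_queue(slq);
    blk_put_queue(slq);
    spin_lock_irqsave(shost->host_lock, flags);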
Reported-by: Chanho Min <chanho.min@lge.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
|
|
Based on Bart Van Assche's patch below.
Avoid that the code for requeueing SCSI requests triggers a
crash by making sure that that code isn't scheduled anymore
after a device has been removed.
Also, source code inspection of __scsi_remove_device() revealed
a race condition in this function: no new SCSI requests must be
accepted for a SCSI device after device removal has started.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Peter Chen <peter.chen@freescale.com>
|
|
properly shutdown
[Peter] Delete the change for block throttle.
request_queue is refcounted but actually depends on lifetime
management from the queue owner - on blk_cleanup_queue(), the block layer
expects that there's no request passing through request_queue and no
new one will.
This is fundamentally broken. The queue owner (e.g. SCSI layer)
doesn't have a way to know whether there are other active users before
calling blk_cleanup_queue() and other users (e.g. bsg) don't have any
guarantee that the queue is and would stay valid while it's holding a
reference.
With delay added in blk_queue_bio() before queue_lock is grabbed, the
following oops can be easily triggered when a device is removed with
in-flight IOs.
sd 0:0:1:0: [sdb] Stopping disk
ata1.01: disabled
general protection fault: 0000 [#1] PREEMPT SMP
CPU 2
Modules linked in:
Pid: 648, comm: test_rawio Not tainted 3.1.0-rc3-work+ #56 Bochs Bochs
RIP: 0010:[<ffffffff8137d651>] [<ffffffff8137d651>] elv_rqhash_find+0x61/0x100
...
Process test_rawio (pid: 648, threadinfo ffff880019efa000, task ffff880019ef8a80)
...
Call Trace:
[<ffffffff8137d774>] elv_merge+0x84/0xe0
[<ffffffff81385b54>] blk_queue_bio+0xf4/0x400
[<ffffffff813838ea>] generic_make_request+0xca/0x100
[<ffffffff81383994>] submit_bio+0x74/0x100
[<ffffffff811c53ec>] dio_bio_submit+0xbc/0xc0
[<ffffffff811c610e>] __blockdev_direct_IO+0x92e/0xb40
[<ffffffff811c39f7>] blkdev_direct_IO+0x57/0x60
[<ffffffff8113b1c5>] generic_file_aio_read+0x6d5/0x760
[<ffffffff8118c1ca>] do_sync_read+0xda/0x120
[<ffffffff8118ce55>] vfs_read+0xc5/0x180
[<ffffffff8118cfaa>] sys_pread64+0x9a/0xb0
[<ffffffff81afaf6b>] system_call_fastpath+0x16/0x1b
This happens because blk_cleanup_queue() destroys the queue and
elevator whether IOs are in progress or not and DEAD tests are
sprinkled in the request processing path without proper
synchronization.
Similar problem exists for blk-throtl. On queue cleanup, blk-throtl
is shut down whether it has requests in it or not. Depending on
timing, it either oopses or throttled bios are lost, putting tasks
which are waiting for bio completion into an eternal D state.
The way it should work is having the usual clear distinction between
shutdown and release. Shutdown drains all currently pending requests,
marks the queue dead, and performs partial teardown of the now
unnecessary part of the queue. Even after shutdown is complete,
reference holders are still allowed to issue requests to the queue
although they will be immediately failed. The rest of teardown
happens on release.
This patch makes the following changes to make blk_cleanup_queue()
behave as proper shutdown.
* QUEUE_FLAG_DEAD is now set while holding both q->exit_mutex and
queue_lock.
* Unsynchronized DEAD check in generic_make_request_checks() removed.
This couldn't make any meaningful difference as the queue could die
after the check.
* blk_drain_queue() updated such that it can drain all requests and is
now called during cleanup.
* blk_throtl updated such that it checks DEAD on grabbing queue_lock,
drains all throttled bios during cleanup and free td when queue is
released.
Signed-off-by: Peter Chen <peter.chen@freescale.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Reorganize queue draining related code in preparation of queue exit
changes.
* Factor out actual draining from elv_quiesce_start() to
blk_drain_queue().
* Make elv_quiesce_start/end() responsible for their own locking.
* Replace open-coded ELVSWITCH clearing in elevator_switch() with
elv_quiesce_end().
This patch doesn't cause any visible functional difference.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Currently get_request[_wait]() allocates request whether queue is dead
or not. This patch makes get_request[_wait]() return NULL if @q is
dead. blk_queue_bio() is updated to fail the submitted bio if request
allocation fails. While at it, add docbook comments for
get_request[_wait]().
Note that the current code makes the rather unclear assumption (there
are spurious DEAD tests scattered around) that the owner of a queue
guarantees that no request travels the block layer if the queue is dead.
This patch in itself doesn't change much; however, it will allow fixing
the broken assumption in the next patch.
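The dead-queue check itself is a one-liner of this shape (sketch):

    if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
        return NULL;  /* fail allocation once the queue is dead */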
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Hi, Jens,
If you recall, I posted an RFC patch for this back in July of last year:
http://lkml.org/lkml/2010/7/13/279
The basic problem is that a process can issue a never-ending stream of
async direct I/Os to the same sector on a device, thus starving out
other I/O in the system (due to the way the alias handling works in both
cfq and deadline). The solution I proposed back then was to start
dispatching from the fifo after a certain number of aliases had been
dispatched. Vivek asked why we had to treat aliases differently at all,
and I never had a good answer. So, I put together a simple patch which
allows aliases to be added to the rb tree (it adds them to the right,
though that doesn't matter as the order isn't guaranteed anyway). I
think this is the preferred solution, as it doesn't break up time slices
in CFQ or batches in deadline. I've tested it, and it does solve the
starvation issue. Let me know what you think.
Cheers,
Jeff
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
|
|
The ARM core stays busy servicing the mma8x5x interrupt, as the
interrupt pin is not configured as a GPIO input.
Signed-off-by: guoyin.chen <guoyin.chen@freescale.com>
|
|
Set the HS and HW register values for 720x576 to be the same as for other resolutions.
Signed-off-by: Fang Hui <b31070@freescale.com>
|
|
There are two possible causes of the AXI bus error:
1. The tile status buffer is allocated from virtual memory. It seems
gc2000 and gc880 don't support a tile status buffer in virtual memory.
2. The stream buffer uses the very beginning of the GPU MMU address
space. In this condition, a fake non-GPU-MMU address may be generated
and fed into the GPU, which causes the AXI bus error.
[DATE]09-01-2014
Signed-off-by: Loren Huang <b02279@freescale.com>
Acked-by: Shawn Guo
|
|
The return value of v2_set_next_event() will lead to an infinite loop in
tick_handle_oneshot_broadcast() - "goto again;" with imx6q WAIT mode
(to be enabled). This happens because when global event did not expire
any CPU local events, the broadcast device will be rearmed to a CPU
local next_event, which could be far away from now and result in a
max_delta_tick programming in set_next_event().
Fix the problem by detecting those next events with increments larger
than 0x7fffffff, and simply return zero in that case.
It leaves mx1_2_set_next_event() unchanged since only v2_set_next_event()
will be running with imx6q WAIT mode support.
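The guard amounts to the following sketch (tcmp and V2_TCN follow the
i.MX timer driver's existing names):

    /* deltas beyond 0x7fffffff report success without reprogramming */
    return evt < 0x7fffffff &&
           (int)(tcmp - __raw_readl(timer_base + V2_TCN)) < 0 ?
                   -ETIME : 0;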
Thanks to Russell King for helping to understand the problem.
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
|