<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/net/core/dev.c, branch v2.6.35.4</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>net: disable preemption before calling smp_processor_id()</title>
<updated>2010-08-26T23:46:00+00:00</updated>
<author>
<name>Changli Gao</name>
<email>xiaosuo@gmail.com</email>
</author>
<published>2010-08-08T03:35:43+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=1378008ccdfdfcedbd6503f00e52124bfce1e0f7'/>
<id>1378008ccdfdfcedbd6503f00e52124bfce1e0f7</id>
<content type='text'>
[ Upstream commit cece1945bffcf1a823cdfa36669beae118419351 ]

Although netif_rx() isn't expected to be called in process context with
preemption enabled, it had better handle this case anyway; that is why
get_cpu() is used in the non-RPS #ifdef branch.  If tree RCU is selected,
rcu_read_lock() won't disable preemption, so preempt_disable() must be
called explicitly.
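
A minimal sketch of the resulting RPS branch of netif_rx() (illustrative,
not the verbatim upstream diff; get_rps_cpu() and enqueue_to_backlog()
are the helpers this path already uses):

	preempt_disable();		/* pin the CPU before asking which one we run on */
	rcu_read_lock();

	cpu = get_rps_cpu(skb-&gt;dev, skb, &amp;rflow);
	if (cpu &lt; 0)
		cpu = smp_processor_id();	/* safe now: preemption is off */

	ret = enqueue_to_backlog(skb, cpu, &amp;qtail);

	rcu_read_unlock();
	preempt_enable();		/* release only after the skb is queued */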

Signed-off-by: Changli Gao &lt;xiaosuo@gmail.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;
</content>
</entry>
<entry>
<title>net: Fix a memmove bug in dev_gro_receive()</title>
<updated>2010-08-26T23:46:00+00:00</updated>
<author>
<name>Jarek Poplawski</name>
<email>jarkao2@gmail.com</email>
</author>
<published>2010-08-11T02:02:10+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=4c0ef2f2c0ef4245486124b84ba0ec9a2b4b3899'/>
<id>4c0ef2f2c0ef4245486124b84ba0ec9a2b4b3899</id>
<content type='text'>
[ Upstream commit e5093aec2e6b60c3df2420057ffab9ed4a6d2792 ]

&gt;Xin Xiaohui wrote:
&gt; I looked into the code dev_gro_receive(), found the code here:
&gt; if the frags[0] is pulled to 0, then the page will be released,
&gt; and memmove() frags left.
&gt; Is that right? I'm not sure whether memmove does the right thing or
&gt; not, but frags[0].size is never set after the memmove, at least. What
&gt; I think a simple way is not to do anything if we find frags[0].size == 0.
&gt; The patch is as follows.
...

This version of the patch fixes the bug directly in memmove.
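
For reference, a minimal sketch of the corrected pull path in
dev_gro_receive() (shape of the fix, not the verbatim diff): once
frags[0] has been pulled down to zero size, its page is released and the
remaining frags are shifted down while nr_frags is decremented in the
same expression, keeping the array and its count consistent:

	skb_shinfo(skb)-&gt;frags[0].page_offset += grow;
	skb_shinfo(skb)-&gt;frags[0].size -= grow;

	if (unlikely(!skb_shinfo(skb)-&gt;frags[0].size)) {
		put_page(skb_shinfo(skb)-&gt;frags[0].page);
		memmove(skb_shinfo(skb)-&gt;frags,
			skb_shinfo(skb)-&gt;frags + 1,
			--skb_shinfo(skb)-&gt;nr_frags * sizeof(skb_frag_t));
	}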

Reported-by: "Xin, Xiaohui" &lt;xiaohui.xin@intel.com&gt;
Signed-off-by: Jarek Poplawski &lt;jarkao2@gmail.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;
</content>
</entry>
<entry>
<title>net: Fix napi_gro_frags vs netpoll path</title>
<updated>2010-08-26T23:46:00+00:00</updated>
<author>
<name>Jarek Poplawski</name>
<email>jarkao2@gmail.com</email>
</author>
<published>2010-08-05T01:19:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=525127bf8070a869857a66c1c5c6c5324bb335dc'/>
<id>525127bf8070a869857a66c1c5c6c5324bb335dc</id>
<content type='text'>
[ Upstream commit ce9e76c8450fc248d3e1fc16ef05e6eb50c02fa5 ]

The netpoll_rx_on() check in __napi_gro_receive() skips part of the
"common" GRO_NORMAL path, in particular the "pull:" label in
dev_gro_receive(), where at least the ethernet header should be copied
for entirely paged skbs.
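
A minimal sketch of the fix (shape approximate, not the verbatim diff):
the early return is dropped from __napi_gro_receive() and the netpoll
check is folded into dev_gro_receive()'s bail-out, so netpoll traffic
still goes through the full GRO_NORMAL path, including the pull:

	/* in dev_gro_receive(): netpoll frames take the normal path,
	 * which still copies at least the ethernet header */
	if (!(skb-&gt;dev-&gt;features &amp; NETIF_F_GRO) || netpoll_rx_on(skb))
		goto normal;

	/* in __napi_gro_receive(): the early
	 *	if (netpoll_rx_on(skb))
	 *		return GRO_NORMAL;
	 * short-circuit is removed */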

Signed-off-by: Jarek Poplawski &lt;jarkao2@gmail.com&gt;
Acked-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;
</content>
</entry>
<entry>
<title>net: dev_forward_skb should call nf_reset</title>
<updated>2010-07-26T04:58:46+00:00</updated>
<author>
<name>Ben Greear</name>
<email>greearb@candelatech.com</email>
</author>
<published>2010-07-22T09:54:47+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=c736eefadb71a01a5e61e0de700f28f6952b4444'/>
<id>c736eefadb71a01a5e61e0de700f28f6952b4444</id>
<content type='text'>
With conntrack zones, and probably with different network
namespaces as well, the netfilter state needs to be recalculated
on packet receive.  If it is not reset, it will not be
recalculated properly.  This patch adds the nf_reset() call to
dev_forward_skb().
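
A minimal sketch of the change, assuming the 2.6.35-era shape of
dev_forward_skb() (not the verbatim diff):

	int dev_forward_skb(struct net_device *dev, struct sk_buff *skb)
	{
		skb_orphan(skb);
		nf_reset(skb);	/* drop stale conntrack state so netfilter
				 * re-evaluates the skb on the new device */

		if (!(dev-&gt;flags &amp; IFF_UP) ||
		    (skb-&gt;len &gt; (dev-&gt;mtu + dev-&gt;hard_header_len))) {
			kfree_skb(skb);
			return NET_RX_DROP;
		}
		skb_set_dev(skb, dev);
		skb-&gt;tstamp.tv64 = 0;
		skb-&gt;pkt_type = PACKET_HOST;
		skb-&gt;protocol = eth_type_trans(skb, dev);
		return netif_rx(skb);
	}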

Signed-off-by: Ben Greear &lt;greearb@candelatech.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>net: fix problem in reading sock TX queue</title>
<updated>2010-07-15T03:50:29+00:00</updated>
<author>
<name>Tom Herbert</name>
<email>therbert@google.com</email>
</author>
<published>2010-07-15T03:50:29+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=b0f77d0eae0c58a5a9691a067ada112ceeae2d00'/>
<id>b0f77d0eae0c58a5a9691a067ada112ceeae2d00</id>
<content type='text'>
Fix a problem in reading the tx_queue recorded in a socket.  In
dev_pick_tx(), the TX queue is read by doing a check with
sk_tx_queue_recorded() on the socket, followed by sk_tx_queue_get().
The problem is that there is no mutual exclusion across these
calls on the socket, so it is possible that the queue in the
sock can be invalidated after sk_tx_queue_recorded() is called;
sk_tx_queue_get() then returns -1, which is stored in the u16
queue_index as 65535, so dev_pick_tx() picks a bogus queue and
can cause a crash in dev_queue_xmit().

We fix this by calling only sk_tx_queue_get(), which does the proper
checks itself.  The interface is that sk_tx_queue_get() returns the TX
queue if the sock argument is non-NULL and a TX queue is recorded, and
-1 otherwise.  sk_tx_queue_recorded() is no longer used, so it can be
removed completely.
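
A minimal sketch of the reworked accessor and call site (shape
approximate):

	static inline int sk_tx_queue_get(const struct sock *sk)
	{
		/* -1 when there is no sock or no recorded queue */
		return sk ? sk-&gt;sk_tx_queue_mapping : -1;
	}

	/* in dev_pick_tx(): one call replaces the racy
	 * sk_tx_queue_recorded()/sk_tx_queue_get() pair */
	queue_index = sk_tx_queue_get(sk);
	if (queue_index &lt; 0) {
		/* no valid recorded queue: fall back to
		 * ndo_select_queue() or skb_tx_hash() */
	}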

Signed-off-by: Tom Herbert &lt;therbert@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>net: skb_tx_hash() fix relative to skb_orphan_try()</title>
<updated>2010-07-14T22:33:27+00:00</updated>
<author>
<name>Eric Dumazet</name>
<email>eric.dumazet@gmail.com</email>
</author>
<published>2010-07-13T05:24:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=87fd308cfc6b2e880bf717a740bd5c58d2aed10c'/>
<id>87fd308cfc6b2e880bf717a740bd5c58d2aed10c</id>
<content type='text'>
commit fc6055a5ba31e2 (net: Introduce skb_orphan_try()) added early
orphaning of skbs.

This unfortunately introduced a performance regression in skb_tx_hash()
in the case of stacked devices (bonding, vlans, ...).

Since skb-&gt;sk is now NULL, we cannot access sk-&gt;sk_hash anymore to
spread tx packets across multiple NIC queues on multiqueue devices.
In this case skb_tx_hash() uses only skb-&gt;protocol, which is the same
value for all flows.

skb_orphan_try() can copy sk-&gt;sk_hash into skb-&gt;rxhash, and skb_tx_hash()
can then use this saved sk_hash value to compute its internal hash value.
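
A minimal sketch of the two halves of the fix (shape approximate):

	/* skb_orphan_try(): save the flow hash before the sock goes away */
	static inline void skb_orphan_try(struct sk_buff *skb)
	{
		struct sock *sk = skb-&gt;sk;

		if (sk) {
			/* skb_tx_hash() won't be able to reach sk-&gt;sk_hash
			 * once the skb is orphaned */
			if (!skb-&gt;rxhash)
				skb-&gt;rxhash = sk-&gt;sk_hash;
			skb_orphan(skb);
		}
	}

	/* skb_tx_hash(): mix in the saved hash instead of relying on
	 * skb-&gt;protocol alone */
	if (skb-&gt;sk &amp;&amp; skb-&gt;sk-&gt;sk_hash)
		hash = skb-&gt;sk-&gt;sk_hash;
	else
		hash = (__force u16) skb-&gt;protocol ^ skb-&gt;rxhash;
	hash = jhash_1word(hash, hashrnd);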

Signed-off-by: Eric Dumazet &lt;eric.dumazet@gmail.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>net: decreasing real_num_tx_queues needs to flush qdisc</title>
<updated>2010-07-03T04:59:07+00:00</updated>
<author>
<name>John Fastabend</name>
<email>john.r.fastabend@intel.com</email>
</author>
<published>2010-07-01T13:21:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=f0796d5c73e59786d09a1e617689d1d415f2db44'/>
<id>f0796d5c73e59786d09a1e617689d1d415f2db44</id>
<content type='text'>
Reducing real_num_tx_queues needs to flush the qdisc, otherwise
skbs with a queue_mapping greater than real_num_tx_queues can
be sent to the underlying driver.

The flow for this is:

dev_queue_xmit()
	dev_pick_tx()
		skb_tx_hash()  =&gt; hash using real_num_tx_queues
		skb_set_queue_mapping()
	...
	qdisc_enqueue_root() =&gt; enqueue skb on txq from hash
...
dev-&gt;real_num_tx_queues -= n
...
sch_direct_xmit()
	dev_hard_start_xmit()
		ndo_start_xmit(skb,dev) =&gt; skb queue set with old hash

skbs are enqueued on the qdisc with skb-&gt;queue_mapping set such
that 0 &lt;= queue_mapping &lt; real_num_tx_queues.  When the driver
decreases real_num_tx_queues, skbs may be dequeued from the
qdisc with a queue_mapping greater than or equal to the new
real_num_tx_queues.

This fixes a case in ixgbe where this was occurring with DCB
and FCoE.  Because the driver uses queue_mapping to map skbs
to tx descriptor rings, we can potentially map skbs to rings
that no longer exist.
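
A minimal sketch of the fix (shape approximate): drivers shrink the
queue count through a helper that purges any qdisc still holding skbs
mapped past the new count:

	void netif_set_real_num_tx_queues(struct net_device *dev,
					  unsigned int txq)
	{
		if (txq &lt; 1 || txq &gt; dev-&gt;num_tx_queues)
			return;

		/* when shrinking, reset qdiscs that may still hold skbs
		 * whose queue_mapping points at a removed queue */
		if (dev-&gt;reg_state == NETREG_REGISTERED &amp;&amp;
		    txq &lt; dev-&gt;real_num_tx_queues)
			qdisc_reset_all_tx_gt(dev, txq);

		dev-&gt;real_num_tx_queues = txq;
	}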

Signed-off-by: John Fastabend &lt;john.r.fastabend@intel.com&gt;
Tested-by: Ross Brattain &lt;ross.b.brattain@intel.com&gt;
Signed-off-by: Jeff Kirsher &lt;jeffrey.t.kirsher@intel.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>net: deliver skbs on inactive slaves to exact matches</title>
<updated>2010-06-11T05:23:34+00:00</updated>
<author>
<name>John Fastabend</name>
<email>john.r.fastabend@intel.com</email>
</author>
<published>2010-06-03T09:30:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=597a264b1a9c7e36d1728f677c66c5c1f7e3b837'/>
<id>597a264b1a9c7e36d1728f677c66c5c1f7e3b837</id>
<content type='text'>
Currently, the accelerated receive path for VLANs will
drop packets if the real device is an inactive slave and
the packet is not one of the special types tested for in
skb_bond_should_drop().  This behavior differs from the
non-accelerated path and from pkts over a bonded vlan.

For example,

vlanx -&gt; bond0 -&gt; ethx

will be dropped in the vlan path and not delivered to any
packet handlers at all.  However,

bond0 -&gt; vlanx -&gt; ethx

and

bond0 -&gt; ethx

will be delivered to handlers that match the exact dev,
because the VLAN path checks the real_dev, which is not a
slave, and netif_receive_skb() doesn't drop frames but only
delivers them to exact matches.

This patch adds an sk_buff flag which is used for tagging
skbs that would previously have been dropped, and allows the
skb to continue on to netif_receive_skb().  There we add
logic to check for the deliver_no_wcard flag; if it
is set, we deliver only to handlers that match exactly.  This
makes both paths above consistent and gives pkt handlers
a way to identify skbs that come from inactive slaves.
Without this patch, in some configurations skbs will be
delivered to handlers with exact matches and in others
be dropped outright in the vlan path.
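
A minimal sketch of the tagging and delivery logic in
netif_receive_skb() (shape approximate):

	if (skb-&gt;deliver_no_wcard)
		null_or_orig = orig_dev;	/* already tagged: exact matches only */
	else if (master) {
		if (skb_bond_should_drop(skb, master)) {
			skb-&gt;deliver_no_wcard = 1;
			null_or_orig = orig_dev; /* deliver only to exact match */
		} else
			skb-&gt;dev = master;
	}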

I have tested the following 4 configurations in failover modes
and load balancing modes.

# bond0 -&gt; ethx

# vlanx -&gt; bond0 -&gt; ethx

# bond0 -&gt; vlanx -&gt; ethx

# bond0 -&gt; ethx
            |
  vlanx -&gt; --

Signed-off-by: John Fastabend &lt;john.r.fastabend@intel.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>net: Print num_rx_queues imbalance warning only when there are allocated queues</title>
<updated>2010-06-09T19:46:03+00:00</updated>
<author>
<name>Tim Gardner</name>
<email>tim.gardner@canonical.com</email>
</author>
<published>2010-06-08T23:51:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=08c801f8d45387a1b46066aad1789a9bb9c4b645'/>
<id>08c801f8d45387a1b46066aad1789a9bb9c4b645</id>
<content type='text'>
BugLink: http://bugs.launchpad.net/bugs/591416

There are a number of network drivers (bridge, bonding, etc.) that are
not yet receive multi-queue enabled and use alloc_netdev(), so don't
print a num_rx_queues imbalance warning in that case.

Also, only print the warning once for those drivers that _are_
multi-queue enabled.
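
A minimal sketch of the reworked check in get_rps_cpu() (shape
approximate):

	if (skb_rx_queue_recorded(skb)) {
		u16 index = skb_get_rx_queue(skb);

		if (unlikely(index &gt;= dev-&gt;num_rx_queues)) {
			/* warn once, and only for devices that actually
			 * allocated multiple RX queues; single-queue
			 * alloc_netdev() devices stay quiet */
			WARN_ONCE(dev-&gt;num_rx_queues &gt; 1,
				  "%s received packet on queue %u, "
				  "but number of RX queues is %u\n",
				  dev-&gt;name, index, dev-&gt;num_rx_queues);
			goto done;
		}
	}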

Signed-off-by: Tim Gardner &lt;tim.gardner@canonical.com&gt;
Acked-by: Eric Dumazet &lt;eric.dumazet@gmail.com&gt;
</content>
</entry>
<entry>
<title>net: fix conflict between null_or_orig and null_or_bond</title>
<updated>2010-06-02T10:35:18+00:00</updated>
<author>
<name>John Fastabend</name>
<email>john.r.fastabend@intel.com</email>
</author>
<published>2010-05-12T21:31:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=2df4a0fa1540c460ec69788ab2a901cc72a75644'/>
<id>2df4a0fa1540c460ec69788ab2a901cc72a75644</id>
<content type='text'>
If an skb is received on an inactive bond that does not meet
the special cases checked for by skb_bond_should_drop(), it should
only be delivered to exact matches, as the comment in
netif_receive_skb() says.

However, because null_or_bond could also be NULL, this is not
always true.  This patch renames null_or_bond to orig_or_bond
and initializes it to orig_dev.  This keeps the intent of
null_or_bond, which is to pass frames received on VLAN interfaces
stacked on bonding interfaces, without invalidating the statement
for null_or_orig.
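
A minimal sketch of the change in netif_receive_skb() (shape
approximate):

	/* was: null_or_bond = NULL, which let any handler wildcard-match */
	orig_or_bond = orig_dev;
	if ((skb-&gt;dev-&gt;priv_flags &amp; IFF_802_1Q_VLAN) &amp;&amp;
	    (vlan_dev_real_dev(skb-&gt;dev)-&gt;priv_flags &amp; IFF_BONDING))
		orig_or_bond = vlan_dev_real_dev(skb-&gt;dev);

	list_for_each_entry_rcu(ptype, head, list) {
		if (ptype-&gt;type == type &amp;&amp;
		    (ptype-&gt;dev == null_or_orig || ptype-&gt;dev == skb-&gt;dev ||
		     ptype-&gt;dev == orig_dev || ptype-&gt;dev == orig_or_bond))
			ret = deliver_skb(skb, ptype, orig_dev);
	}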

Signed-off-by: John Fastabend &lt;john.r.fastabend@intel.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
</feed>
