<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/net/ipv6/ip6_offload.c, branch v4.10</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>gro: Disable frag0 optimization on IPv6 ext headers</title>
<updated>2017-01-11T02:30:33+00:00</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2017-01-10T20:24:15+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=57ea52a865144aedbcd619ee0081155e658b6f7d'/>
<id>57ea52a865144aedbcd619ee0081155e658b6f7d</id>
<content type='text'>
The GRO fast path caches the frag0 address.  This address becomes
invalid if frag0 is modified by pskb_may_pull or its variants.
So whenever that happens we must disable the frag0 optimization.

This is usually done through the combination of gro_header_hard
and gro_header_slow; however, the IPv6 extension header path did
the pulling directly and would continue to use the GRO fast path
incorrectly.

This patch fixes it by disabling the fast path when we enter the
IPv6 extension header path.
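
The mechanism can be sketched with a minimal, self-contained model (the
struct and helper names here are illustrative, not the kernel's exact
definitions): once the cached frag0 pointer is cleared, any later header
lookup fails the fast-path check and the caller must take the slow path.

```c
#include <stddef.h>

/* Simplified model of the per-packet GRO control block: the fast
 * path caches a pointer into the first fragment (frag0). */
struct gro_cb {
	const void *frag0;      /* cached address of the first fragment */
	unsigned int frag0_len; /* number of bytes valid at frag0 */
};

/* Disable the frag0 optimization: after a pull may have moved the
 * data, the cached address can no longer be trusted. */
static void gro_frag0_invalidate(struct gro_cb *cb)
{
	cb->frag0 = NULL;
	cb->frag0_len = 0;
}

/* Fast-path header lookup: only valid while the cache holds at
 * least hlen bytes; NULL tells the caller to use the slow path. */
static const void *gro_header_fast(const struct gro_cb *cb, unsigned int hlen)
{
	if (cb->frag0 == NULL || cb->frag0_len < hlen)
		return NULL;
	return cb->frag0;
}
```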

Fixes: 78a478d0efd9 ("gro: Inline skb_gro_header and cache frag0 virtual address")
Reported-by: Slava Shwartsman &lt;slavash@mellanox.com&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Signed-off-by: Eric Dumazet &lt;edumazet@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The GRO fast path caches the frag0 address.  This address becomes
invalid if frag0 is modified by pskb_may_pull or its variants.
So whenever that happens we must disable the frag0 optimization.

This is usually done through the combination of gro_header_hard
and gro_header_slow; however, the IPv6 extension header path did
the pulling directly and would continue to use the GRO fast path
incorrectly.

This patch fixes it by disabling the fast path when we enter the
IPv6 extension header path.

Fixes: 78a478d0efd9 ("gro: Inline skb_gro_header and cache frag0 virtual address")
Reported-by: Slava Shwartsman &lt;slavash@mellanox.com&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Signed-off-by: Eric Dumazet &lt;edumazet@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>ip6_offload: check segs for NULL in ipv6_gso_segment.</title>
<updated>2016-12-02T18:34:58+00:00</updated>
<author>
<name>Artem Savkov</name>
<email>asavkov@redhat.com</email>
</author>
<published>2016-12-01T13:06:04+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=6b6ebb6b01c873d0cfe3449e8a1219ee6e5fc022'/>
<id>6b6ebb6b01c873d0cfe3449e8a1219ee6e5fc022</id>
<content type='text'>
segs needs to be checked for NULL in ipv6_gso_segment() before calling
skb_shinfo(segs); otherwise the kernel can run into a NULL-pointer dereference:

[   97.811262] BUG: unable to handle kernel NULL pointer dereference at 00000000000000cc
[   97.819112] IP: [&lt;ffffffff816e52f9&gt;] ipv6_gso_segment+0x119/0x2f0
[   97.825214] PGD 0 [   97.827047]
[   97.828540] Oops: 0000 [#1] SMP
[   97.831678] Modules linked in: vhost_net vhost macvtap macvlan nfsv3 rpcsec_gss_krb5
nfsv4 dns_resolver nfs fscache xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4
iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack
ipt_REJECT nf_reject_ipv4 tun ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter
bridge stp llc snd_hda_codec_realtek snd_hda_codec_hdmi snd_hda_codec_generic snd_hda_intel
snd_hda_codec edac_mce_amd snd_hda_core edac_core snd_hwdep kvm_amd snd_seq kvm snd_seq_device
snd_pcm irqbypass snd_timer ppdev parport_serial snd parport_pc k10temp pcspkr soundcore parport
sp5100_tco shpchp sg wmi i2c_piix4 acpi_cpufreq nfsd auth_rpcgss nfs_acl lockd grace sunrpc
ip_tables xfs libcrc32c sr_mod cdrom sd_mod ata_generic pata_acpi amdkfd amd_iommu_v2 radeon
broadcom bcm_phy_lib i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops
ttm ahci serio_raw tg3 firewire_ohci libahci pata_atiixp drm ptp libata firewire_core pps_core
i2c_core crc_itu_t fjes dm_mirror dm_region_hash dm_log dm_mod
[   97.927721] CPU: 1 PID: 3504 Comm: vhost-3495 Not tainted 4.9.0-7.el7.test.x86_64 #1
[   97.935457] Hardware name: AMD Snook/Snook, BIOS ESK0726A 07/26/2010
[   97.941806] task: ffff880129a1c080 task.stack: ffffc90001bcc000
[   97.947720] RIP: 0010:[&lt;ffffffff816e52f9&gt;]  [&lt;ffffffff816e52f9&gt;] ipv6_gso_segment+0x119/0x2f0
[   97.956251] RSP: 0018:ffff88012fc43a10  EFLAGS: 00010207
[   97.961557] RAX: 0000000000000000 RBX: ffff8801292c8700 RCX: 0000000000000594
[   97.968687] RDX: 0000000000000593 RSI: ffff880129a846c0 RDI: 0000000000240000
[   97.975814] RBP: ffff88012fc43a68 R08: ffff880129a8404e R09: 0000000000000000
[   97.982942] R10: 0000000000000000 R11: ffff880129a84076 R12: 00000020002949b3
[   97.990070] R13: ffff88012a580000 R14: 0000000000000000 R15: ffff88012a580000
[   97.997198] FS:  0000000000000000(0000) GS:ffff88012fc40000(0000) knlGS:0000000000000000
[   98.005280] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   98.011021] CR2: 00000000000000cc CR3: 0000000126c5d000 CR4: 00000000000006e0
[   98.018149] Stack:
[   98.020157]  00000000ffffffff ffff88012fc43ac8 ffffffffa017ad0a 000000000000000e
[   98.027584]  0000001300000000 0000000077d59998 ffff8801292c8700 00000020002949b3
[   98.035010]  ffff88012a580000 0000000000000000 ffff88012a580000 ffff88012fc43a98
[   98.042437] Call Trace:
[   98.044879]  &lt;IRQ&gt; [   98.046803]  [&lt;ffffffffa017ad0a&gt;] ? tg3_start_xmit+0x84a/0xd60 [tg3]
[   98.053156]  [&lt;ffffffff815eeee0&gt;] skb_mac_gso_segment+0xb0/0x130
[   98.059158]  [&lt;ffffffff815eefd3&gt;] __skb_gso_segment+0x73/0x110
[   98.064985]  [&lt;ffffffff815ef40d&gt;] validate_xmit_skb+0x12d/0x2b0
[   98.070899]  [&lt;ffffffff815ef5d2&gt;] validate_xmit_skb_list+0x42/0x70
[   98.077073]  [&lt;ffffffff81618560&gt;] sch_direct_xmit+0xd0/0x1b0
[   98.082726]  [&lt;ffffffff815efd86&gt;] __dev_queue_xmit+0x486/0x690
[   98.088554]  [&lt;ffffffff8135c135&gt;] ? cpumask_next_and+0x35/0x50
[   98.094380]  [&lt;ffffffff815effa0&gt;] dev_queue_xmit+0x10/0x20
[   98.099863]  [&lt;ffffffffa09ce057&gt;] br_dev_queue_push_xmit+0xa7/0x170 [bridge]
[   98.106907]  [&lt;ffffffffa09ce161&gt;] br_forward_finish+0x41/0xc0 [bridge]
[   98.113430]  [&lt;ffffffff81627cf2&gt;] ? nf_iterate+0x52/0x60
[   98.118735]  [&lt;ffffffff81627d6b&gt;] ? nf_hook_slow+0x6b/0xc0
[   98.124216]  [&lt;ffffffffa09ce32c&gt;] __br_forward+0x14c/0x1e0 [bridge]
[   98.130480]  [&lt;ffffffffa09ce120&gt;] ? br_dev_queue_push_xmit+0x170/0x170 [bridge]
[   98.137785]  [&lt;ffffffffa09ce4bd&gt;] br_forward+0x9d/0xb0 [bridge]
[   98.143701]  [&lt;ffffffffa09cfbb7&gt;] br_handle_frame_finish+0x267/0x560 [bridge]
[   98.150834]  [&lt;ffffffffa09d0064&gt;] br_handle_frame+0x174/0x2f0 [bridge]
[   98.157355]  [&lt;ffffffff8102fb89&gt;] ? sched_clock+0x9/0x10
[   98.162662]  [&lt;ffffffff810b63b2&gt;] ? sched_clock_cpu+0x72/0xa0
[   98.168403]  [&lt;ffffffff815eccf5&gt;] __netif_receive_skb_core+0x1e5/0xa20
[   98.174926]  [&lt;ffffffff813659f9&gt;] ? timerqueue_add+0x59/0xb0
[   98.180580]  [&lt;ffffffff815ed548&gt;] __netif_receive_skb+0x18/0x60
[   98.186494]  [&lt;ffffffff815ee625&gt;] process_backlog+0x95/0x140
[   98.192145]  [&lt;ffffffff815edccd&gt;] net_rx_action+0x16d/0x380
[   98.197713]  [&lt;ffffffff8170cff1&gt;] __do_softirq+0xd1/0x283
[   98.203106]  [&lt;ffffffff8170b2bc&gt;] do_softirq_own_stack+0x1c/0x30
[   98.209107]  &lt;EOI&gt; [   98.211029]  [&lt;ffffffff8108a5c0&gt;] do_softirq+0x50/0x60
[   98.216166]  [&lt;ffffffff815ec853&gt;] netif_rx_ni+0x33/0x80
[   98.221386]  [&lt;ffffffffa09eeff7&gt;] tun_get_user+0x487/0x7f0 [tun]
[   98.227388]  [&lt;ffffffffa09ef3ab&gt;] tun_sendmsg+0x4b/0x60 [tun]
[   98.233129]  [&lt;ffffffffa0b68932&gt;] handle_tx+0x282/0x540 [vhost_net]
[   98.239392]  [&lt;ffffffffa0b68c25&gt;] handle_tx_kick+0x15/0x20 [vhost_net]
[   98.245916]  [&lt;ffffffffa0abacfe&gt;] vhost_worker+0x9e/0xf0 [vhost]
[   98.251919]  [&lt;ffffffffa0abac60&gt;] ? vhost_umem_alloc+0x40/0x40 [vhost]
[   98.258440]  [&lt;ffffffff81003a47&gt;] ? do_syscall_64+0x67/0x180
[   98.264094]  [&lt;ffffffff810a44d9&gt;] kthread+0xd9/0xf0
[   98.268965]  [&lt;ffffffff810a4400&gt;] ? kthread_park+0x60/0x60
[   98.274444]  [&lt;ffffffff8170a4d5&gt;] ret_from_fork+0x25/0x30
[   98.279836] Code: 8b 93 d8 00 00 00 48 2b 93 d0 00 00 00 4c 89 e6 48 89 df 66 89 93 c2 00 00 00 ff 10 48 3d 00 f0 ff ff 49 89 c2 0f 87 52 01 00 00 &lt;41&gt; 8b 92 cc 00 00 00 48 8b 80 d0 00 00 00 44 0f b7 74 10 06 66
[   98.299425] RIP  [&lt;ffffffff816e52f9&gt;] ipv6_gso_segment+0x119/0x2f0
[   98.305612]  RSP &lt;ffff88012fc43a10&gt;
[   98.309094] CR2: 00000000000000cc
[   98.312406] ---[ end trace 726a2c7a2d2d78d0 ]---

Signed-off-by: Artem Savkov &lt;asavkov@redhat.com&gt;
Acked-by: Eric Dumazet &lt;edumazet@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
segs needs to be checked for NULL in ipv6_gso_segment() before calling
skb_shinfo(segs); otherwise the kernel can run into a NULL-pointer dereference:

[   97.811262] BUG: unable to handle kernel NULL pointer dereference at 00000000000000cc
[   97.819112] IP: [&lt;ffffffff816e52f9&gt;] ipv6_gso_segment+0x119/0x2f0
[   97.825214] PGD 0 [   97.827047]
[   97.828540] Oops: 0000 [#1] SMP
[   97.831678] Modules linked in: vhost_net vhost macvtap macvlan nfsv3 rpcsec_gss_krb5
nfsv4 dns_resolver nfs fscache xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4
iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack
ipt_REJECT nf_reject_ipv4 tun ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter
bridge stp llc snd_hda_codec_realtek snd_hda_codec_hdmi snd_hda_codec_generic snd_hda_intel
snd_hda_codec edac_mce_amd snd_hda_core edac_core snd_hwdep kvm_amd snd_seq kvm snd_seq_device
snd_pcm irqbypass snd_timer ppdev parport_serial snd parport_pc k10temp pcspkr soundcore parport
sp5100_tco shpchp sg wmi i2c_piix4 acpi_cpufreq nfsd auth_rpcgss nfs_acl lockd grace sunrpc
ip_tables xfs libcrc32c sr_mod cdrom sd_mod ata_generic pata_acpi amdkfd amd_iommu_v2 radeon
broadcom bcm_phy_lib i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops
ttm ahci serio_raw tg3 firewire_ohci libahci pata_atiixp drm ptp libata firewire_core pps_core
i2c_core crc_itu_t fjes dm_mirror dm_region_hash dm_log dm_mod
[   97.927721] CPU: 1 PID: 3504 Comm: vhost-3495 Not tainted 4.9.0-7.el7.test.x86_64 #1
[   97.935457] Hardware name: AMD Snook/Snook, BIOS ESK0726A 07/26/2010
[   97.941806] task: ffff880129a1c080 task.stack: ffffc90001bcc000
[   97.947720] RIP: 0010:[&lt;ffffffff816e52f9&gt;]  [&lt;ffffffff816e52f9&gt;] ipv6_gso_segment+0x119/0x2f0
[   97.956251] RSP: 0018:ffff88012fc43a10  EFLAGS: 00010207
[   97.961557] RAX: 0000000000000000 RBX: ffff8801292c8700 RCX: 0000000000000594
[   97.968687] RDX: 0000000000000593 RSI: ffff880129a846c0 RDI: 0000000000240000
[   97.975814] RBP: ffff88012fc43a68 R08: ffff880129a8404e R09: 0000000000000000
[   97.982942] R10: 0000000000000000 R11: ffff880129a84076 R12: 00000020002949b3
[   97.990070] R13: ffff88012a580000 R14: 0000000000000000 R15: ffff88012a580000
[   97.997198] FS:  0000000000000000(0000) GS:ffff88012fc40000(0000) knlGS:0000000000000000
[   98.005280] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   98.011021] CR2: 00000000000000cc CR3: 0000000126c5d000 CR4: 00000000000006e0
[   98.018149] Stack:
[   98.020157]  00000000ffffffff ffff88012fc43ac8 ffffffffa017ad0a 000000000000000e
[   98.027584]  0000001300000000 0000000077d59998 ffff8801292c8700 00000020002949b3
[   98.035010]  ffff88012a580000 0000000000000000 ffff88012a580000 ffff88012fc43a98
[   98.042437] Call Trace:
[   98.044879]  &lt;IRQ&gt; [   98.046803]  [&lt;ffffffffa017ad0a&gt;] ? tg3_start_xmit+0x84a/0xd60 [tg3]
[   98.053156]  [&lt;ffffffff815eeee0&gt;] skb_mac_gso_segment+0xb0/0x130
[   98.059158]  [&lt;ffffffff815eefd3&gt;] __skb_gso_segment+0x73/0x110
[   98.064985]  [&lt;ffffffff815ef40d&gt;] validate_xmit_skb+0x12d/0x2b0
[   98.070899]  [&lt;ffffffff815ef5d2&gt;] validate_xmit_skb_list+0x42/0x70
[   98.077073]  [&lt;ffffffff81618560&gt;] sch_direct_xmit+0xd0/0x1b0
[   98.082726]  [&lt;ffffffff815efd86&gt;] __dev_queue_xmit+0x486/0x690
[   98.088554]  [&lt;ffffffff8135c135&gt;] ? cpumask_next_and+0x35/0x50
[   98.094380]  [&lt;ffffffff815effa0&gt;] dev_queue_xmit+0x10/0x20
[   98.099863]  [&lt;ffffffffa09ce057&gt;] br_dev_queue_push_xmit+0xa7/0x170 [bridge]
[   98.106907]  [&lt;ffffffffa09ce161&gt;] br_forward_finish+0x41/0xc0 [bridge]
[   98.113430]  [&lt;ffffffff81627cf2&gt;] ? nf_iterate+0x52/0x60
[   98.118735]  [&lt;ffffffff81627d6b&gt;] ? nf_hook_slow+0x6b/0xc0
[   98.124216]  [&lt;ffffffffa09ce32c&gt;] __br_forward+0x14c/0x1e0 [bridge]
[   98.130480]  [&lt;ffffffffa09ce120&gt;] ? br_dev_queue_push_xmit+0x170/0x170 [bridge]
[   98.137785]  [&lt;ffffffffa09ce4bd&gt;] br_forward+0x9d/0xb0 [bridge]
[   98.143701]  [&lt;ffffffffa09cfbb7&gt;] br_handle_frame_finish+0x267/0x560 [bridge]
[   98.150834]  [&lt;ffffffffa09d0064&gt;] br_handle_frame+0x174/0x2f0 [bridge]
[   98.157355]  [&lt;ffffffff8102fb89&gt;] ? sched_clock+0x9/0x10
[   98.162662]  [&lt;ffffffff810b63b2&gt;] ? sched_clock_cpu+0x72/0xa0
[   98.168403]  [&lt;ffffffff815eccf5&gt;] __netif_receive_skb_core+0x1e5/0xa20
[   98.174926]  [&lt;ffffffff813659f9&gt;] ? timerqueue_add+0x59/0xb0
[   98.180580]  [&lt;ffffffff815ed548&gt;] __netif_receive_skb+0x18/0x60
[   98.186494]  [&lt;ffffffff815ee625&gt;] process_backlog+0x95/0x140
[   98.192145]  [&lt;ffffffff815edccd&gt;] net_rx_action+0x16d/0x380
[   98.197713]  [&lt;ffffffff8170cff1&gt;] __do_softirq+0xd1/0x283
[   98.203106]  [&lt;ffffffff8170b2bc&gt;] do_softirq_own_stack+0x1c/0x30
[   98.209107]  &lt;EOI&gt; [   98.211029]  [&lt;ffffffff8108a5c0&gt;] do_softirq+0x50/0x60
[   98.216166]  [&lt;ffffffff815ec853&gt;] netif_rx_ni+0x33/0x80
[   98.221386]  [&lt;ffffffffa09eeff7&gt;] tun_get_user+0x487/0x7f0 [tun]
[   98.227388]  [&lt;ffffffffa09ef3ab&gt;] tun_sendmsg+0x4b/0x60 [tun]
[   98.233129]  [&lt;ffffffffa0b68932&gt;] handle_tx+0x282/0x540 [vhost_net]
[   98.239392]  [&lt;ffffffffa0b68c25&gt;] handle_tx_kick+0x15/0x20 [vhost_net]
[   98.245916]  [&lt;ffffffffa0abacfe&gt;] vhost_worker+0x9e/0xf0 [vhost]
[   98.251919]  [&lt;ffffffffa0abac60&gt;] ? vhost_umem_alloc+0x40/0x40 [vhost]
[   98.258440]  [&lt;ffffffff81003a47&gt;] ? do_syscall_64+0x67/0x180
[   98.264094]  [&lt;ffffffff810a44d9&gt;] kthread+0xd9/0xf0
[   98.268965]  [&lt;ffffffff810a4400&gt;] ? kthread_park+0x60/0x60
[   98.274444]  [&lt;ffffffff8170a4d5&gt;] ret_from_fork+0x25/0x30
[   98.279836] Code: 8b 93 d8 00 00 00 48 2b 93 d0 00 00 00 4c 89 e6 48 89 df 66 89 93 c2 00 00 00 ff 10 48 3d 00 f0 ff ff 49 89 c2 0f 87 52 01 00 00 &lt;41&gt; 8b 92 cc 00 00 00 48 8b 80 d0 00 00 00 44 0f b7 74 10 06 66
[   98.299425] RIP  [&lt;ffffffff816e52f9&gt;] ipv6_gso_segment+0x119/0x2f0
[   98.305612]  RSP &lt;ffff88012fc43a10&gt;
[   98.309094] CR2: 00000000000000cc
[   98.312406] ---[ end trace 726a2c7a2d2d78d0 ]---

Signed-off-by: Artem Savkov &lt;asavkov@redhat.com&gt;
Acked-by: Eric Dumazet &lt;edumazet@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>net: add recursion limit to GRO</title>
<updated>2016-10-20T18:32:22+00:00</updated>
<author>
<name>Sabrina Dubroca</name>
<email>sd@queasysnail.net</email>
</author>
<published>2016-10-20T13:58:02+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=fcd91dd449867c6bfe56a81cabba76b829fd05cd'/>
<id>fcd91dd449867c6bfe56a81cabba76b829fd05cd</id>
<content type='text'>
Currently, GRO can do unlimited recursion through the gro_receive
handlers.  This was fixed for tunneling protocols by limiting tunnel GRO
to one level with encap_mark, but both VLAN and TEB still have this
problem.  Thus, the kernel is vulnerable to a stack overflow if we
receive a packet composed entirely of VLAN headers.

This patch adds a recursion counter to the GRO layer to prevent stack
overflow.  When a gro_receive function hits the recursion limit, GRO is
aborted for this skb and it is processed normally.  This recursion
counter is put in the GRO CB, but could be turned into a percpu counter
if we run out of space in the CB.
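
The counting scheme described above can be sketched as a standalone model
(the names and limit value mirror the upstream patch, but this is an
illustration, not the kernel code): each nested gro_receive call bumps a
per-packet counter, and reaching the limit aborts GRO for that packet.

```c
/* Depth limit used by the upstream patch. */
#define GRO_RECURSION_LIMIT 15

/* Simplified per-packet GRO control block holding the counter. */
struct gro_cb {
	unsigned int recursion_counter;
};

/* Called before each nested gro_receive handler: returns nonzero
 * when the limit is reached, meaning GRO must be aborted and the
 * packet processed normally (without aggregation). */
static int gro_recursion_inc_test(struct gro_cb *cb)
{
	return ++cb->recursion_counter == GRO_RECURSION_LIMIT;
}
```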

Thanks to Vladimír Beneš &lt;vbenes@redhat.com&gt; for the initial bug report.

Fixes: CVE-2016-7039
Fixes: 9b174d88c257 ("net: Add Transparent Ethernet Bridging GRO support.")
Fixes: 66e5133f19e9 ("vlan: Add GRO support for non hardware accelerated vlan")
Signed-off-by: Sabrina Dubroca &lt;sd@queasysnail.net&gt;
Reviewed-by: Jiri Benc &lt;jbenc@redhat.com&gt;
Acked-by: Hannes Frederic Sowa &lt;hannes@stressinduktion.org&gt;
Acked-by: Tom Herbert &lt;tom@herbertland.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently, GRO can do unlimited recursion through the gro_receive
handlers.  This was fixed for tunneling protocols by limiting tunnel GRO
to one level with encap_mark, but both VLAN and TEB still have this
problem.  Thus, the kernel is vulnerable to a stack overflow if we
receive a packet composed entirely of VLAN headers.

This patch adds a recursion counter to the GRO layer to prevent stack
overflow.  When a gro_receive function hits the recursion limit, GRO is
aborted for this skb and it is processed normally.  This recursion
counter is put in the GRO CB, but could be turned into a percpu counter
if we run out of space in the CB.

Thanks to Vladimír Beneš &lt;vbenes@redhat.com&gt; for the initial bug report.

Fixes: CVE-2016-7039
Fixes: 9b174d88c257 ("net: Add Transparent Ethernet Bridging GRO support.")
Fixes: 66e5133f19e9 ("vlan: Add GRO support for non hardware accelerated vlan")
Signed-off-by: Sabrina Dubroca &lt;sd@queasysnail.net&gt;
Reviewed-by: Jiri Benc &lt;jbenc@redhat.com&gt;
Acked-by: Hannes Frederic Sowa &lt;hannes@stressinduktion.org&gt;
Acked-by: Tom Herbert &lt;tom@herbertland.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>gso: Support partial splitting at the frag_list pointer</title>
<updated>2016-09-20T00:59:34+00:00</updated>
<author>
<name>Steffen Klassert</name>
<email>steffen.klassert@secunet.com</email>
</author>
<published>2016-09-19T10:58:47+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=07b26c9454a2a19fff86d6fcf2aba6bc801eb8d8'/>
<id>07b26c9454a2a19fff86d6fcf2aba6bc801eb8d8</id>
<content type='text'>
Since commit 8a29111c7 ("net: gro: allow to build full sized skb")
gro may build buffers with a frag_list. This can hurt forwarding
because most NICs can't offload such packets; they need to be
segmented in software. This patch splits buffers with a frag_list
at the frag_list pointer into buffers that can be TSO offloaded.

Signed-off-by: Steffen Klassert &lt;steffen.klassert@secunet.com&gt;
Acked-by: Alexander Duyck &lt;alexander.h.duyck@intel.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Since commit 8a29111c7 ("net: gro: allow to build full sized skb")
gro may build buffers with a frag_list. This can hurt forwarding
because most NICs can't offload such packets; they need to be
segmented in software. This patch splits buffers with a frag_list
at the frag_list pointer into buffers that can be TSO offloaded.

Signed-off-by: Steffen Klassert &lt;steffen.klassert@secunet.com&gt;
Acked-by: Alexander Duyck &lt;alexander.h.duyck@intel.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>ip4ip6: Support for GSO/GRO</title>
<updated>2016-05-20T22:03:17+00:00</updated>
<author>
<name>Tom Herbert</name>
<email>tom@herbertland.com</email>
</author>
<published>2016-05-18T16:06:23+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=b8921ca83eed2496108ee308e9a41c5084089680'/>
<id>b8921ca83eed2496108ee308e9a41c5084089680</id>
<content type='text'>
Signed-off-by: Tom Herbert &lt;tom@herbertland.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Signed-off-by: Tom Herbert &lt;tom@herbertland.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>ip6ip6: Support for GSO/GRO</title>
<updated>2016-05-20T22:03:17+00:00</updated>
<author>
<name>Tom Herbert</name>
<email>tom@herbertland.com</email>
</author>
<published>2016-05-18T16:06:22+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=815d22e55b0eba3bfb8f0ba532ce9ae364fee556'/>
<id>815d22e55b0eba3bfb8f0ba532ce9ae364fee556</id>
<content type='text'>
Signed-off-by: Tom Herbert &lt;tom@herbertland.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Signed-off-by: Tom Herbert &lt;tom@herbertland.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>net: define gso types for IPx over IPv4 and IPv6</title>
<updated>2016-05-20T22:03:15+00:00</updated>
<author>
<name>Tom Herbert</name>
<email>tom@herbertland.com</email>
</author>
<published>2016-05-18T16:06:10+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=7e13318daa4a67bff2f800923a993ef3818b3c53'/>
<id>7e13318daa4a67bff2f800923a993ef3818b3c53</id>
<content type='text'>
This patch defines two new GSO definitions SKB_GSO_IPXIP4 and
SKB_GSO_IPXIP6 along with corresponding NETIF_F_GSO_IPXIP4 and
NETIF_F_GSO_IPXIP6. These are used to describe IP-in-IP
tunnels and what the outer protocol is. The inner protocol
can be deduced from other GSO types (e.g. SKB_GSO_TCPV4 and
SKB_GSO_TCPV6). The GSO types of SKB_GSO_IPIP and SKB_GSO_SIT
are removed (these are both instances of SKB_GSO_IPXIP4).
SKB_GSO_IPXIP6 will be used when support for GSO with IP
encapsulation over IPv6 is added.

Signed-off-by: Tom Herbert &lt;tom@herbertland.com&gt;
Acked-by: Jeff Kirsher &lt;jeffrey.t.kirsher@intel.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This patch defines two new GSO definitions SKB_GSO_IPXIP4 and
SKB_GSO_IPXIP6 along with corresponding NETIF_F_GSO_IPXIP4 and
NETIF_F_GSO_IPXIP6. These are used to describe IP-in-IP
tunnels and what the outer protocol is. The inner protocol
can be deduced from other GSO types (e.g. SKB_GSO_TCPV4 and
SKB_GSO_TCPV6). The GSO types of SKB_GSO_IPIP and SKB_GSO_SIT
are removed (these are both instances of SKB_GSO_IPXIP4).
SKB_GSO_IPXIP6 will be used when support for GSO with IP
encapsulation over IPv6 is added.

Signed-off-by: Tom Herbert &lt;tom@herbertland.com&gt;
Acked-by: Jeff Kirsher &lt;jeffrey.t.kirsher@intel.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>gso: Remove arbitrary checks for unsupported GSO</title>
<updated>2016-05-20T22:03:15+00:00</updated>
<author>
<name>Tom Herbert</name>
<email>tom@herbertland.com</email>
</author>
<published>2016-05-18T16:06:09+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=5c7cdf339af560f980b12eb6b0b5aa5f68ac6658'/>
<id>5c7cdf339af560f980b12eb6b0b5aa5f68ac6658</id>
<content type='text'>
In several gso_segment functions there are checks of gso_type against
a seemingly arbitrary list of SKB_GSO_* flags. This seems like an
attempt to identify unsupported GSO types, but since the stack is
the one that set these GSO types in the first place this seems
unnecessary to do. If a combination isn't valid in the first
place, the stack should not allow setting it.

This is a code simplification, especially for adding new GSO types.

Signed-off-by: Tom Herbert &lt;tom@herbertland.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
In several gso_segment functions there are checks of gso_type against
a seemingly arbitrary list of SKB_GSO_* flags. This seems like an
attempt to identify unsupported GSO types, but since the stack is
the one that set these GSO types in the first place this seems
unnecessary to do. If a combination isn't valid in the first
place, the stack should not allow setting it.

This is a code simplification, especially for adding new GSO types.

Signed-off-by: Tom Herbert &lt;tom@herbertland.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>GSO: Support partial segmentation offload</title>
<updated>2016-04-14T20:23:41+00:00</updated>
<author>
<name>Alexander Duyck</name>
<email>aduyck@mirantis.com</email>
</author>
<published>2016-04-11T01:45:03+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=802ab55adc39a06940a1b384e9fd0387fc762d7e'/>
<id>802ab55adc39a06940a1b384e9fd0387fc762d7e</id>
<content type='text'>
This patch adds support for something I am referring to as GSO partial.
The basic idea is that we can support a broader range of devices for
segmentation if we use fixed outer headers and have the hardware only
really deal with segmenting the inner header.  The name reflects the fact
that everything before csum_start will be fixed headers, and everything
after will be the region that is handled by hardware.

The current implementation allows us to add support for the
following GSO types with an inner TSO_MANGLEID or TSO6 offload:
NETIF_F_GSO_GRE
NETIF_F_GSO_GRE_CSUM
NETIF_F_GSO_IPIP
NETIF_F_GSO_SIT
NETIF_F_UDP_TUNNEL
NETIF_F_UDP_TUNNEL_CSUM

In the case of hardware that already supports tunneling we may be able to
extend this further to support TSO_TCPV4 without TSO_MANGLEID if the
hardware can support updating inner IPv4 headers.

Signed-off-by: Alexander Duyck &lt;aduyck@mirantis.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This patch adds support for something I am referring to as GSO partial.
The basic idea is that we can support a broader range of devices for
segmentation if we use fixed outer headers and have the hardware only
really deal with segmenting the inner header.  The idea behind the naming
is due to the fact that everything before csum_start will be fixed headers,
and everything after will be the region that is handled by hardware.

The current implementation allows us to add support for the
following GSO types with an inner TSO_MANGLEID or TSO6 offload:
NETIF_F_GSO_GRE
NETIF_F_GSO_GRE_CSUM
NETIF_F_GSO_IPIP
NETIF_F_GSO_SIT
NETIF_F_UDP_TUNNEL
NETIF_F_UDP_TUNNEL_CSUM

In the case of hardware that already supports tunneling we may be able to
extend this further to support TSO_TCPV4 without TSO_MANGLEID if the
hardware can support updating inner IPv4 headers.

Signed-off-by: Alexander Duyck &lt;aduyck@mirantis.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>GRO: Add support for TCP with fixed IPv4 ID field, limit tunnel IP ID values</title>
<updated>2016-04-14T20:23:41+00:00</updated>
<author>
<name>Alexander Duyck</name>
<email>aduyck@mirantis.com</email>
</author>
<published>2016-04-11T01:44:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=1530545ed64b42e87acb43c0c16401bd1ebae6bf'/>
<id>1530545ed64b42e87acb43c0c16401bd1ebae6bf</id>
<content type='text'>
This patch does two things.

First it allows TCP to aggregate TCP frames with a fixed IPv4 ID field.  As
a result we should now be able to aggregate flows that were converted from
IPv6 to IPv4.  In addition this allows us more flexibility for future
implementations of segmentation as we may be able to use a fixed IP ID when
segmenting the flow.

The second thing this does is that it places limitations on the outer IPv4
ID header in the case of tunneled frames.  Specifically it forces the IP ID
to be incrementing by 1 unless the DF bit is set in the outer IPv4 header.
This way we can avoid creating overlapping series of IP IDs that could
possibly be fragmented if the frame goes through GRO and is then
resegmented via GSO.
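
The outer-ID rule can be illustrated with a small standalone helper (a
hypothetical name; the kernel performs this check inside its IPv4 GRO
receive path): a tunneled frame may only be aggregated when the outer ID
increments by exactly 1, or additionally stays fixed when DF is set.

```c
#include <stdint.h>

/* Decide whether a tunneled frame's outer IPv4 ID allows it to be
 * aggregated after the previous frame.  Without DF the ID must
 * increment by exactly 1; with DF set a fixed ID is also accepted,
 * since the IDs can be regenerated if the flow is resegmented. */
static int outer_ipid_may_aggregate(uint16_t prev_id, uint16_t id, int df_set)
{
	uint16_t next = (uint16_t)(prev_id + 1); /* wraps at 65535 */

	if (id == next)
		return 1;
	return df_set && id == prev_id;
}
```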

Signed-off-by: Alexander Duyck &lt;aduyck@mirantis.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This patch does two things.

First it allows TCP to aggregate TCP frames with a fixed IPv4 ID field.  As
a result we should now be able to aggregate flows that were converted from
IPv6 to IPv4.  In addition this allows us more flexibility for future
implementations of segmentation as we may be able to use a fixed IP ID when
segmenting the flow.

The second thing this does is that it places limitations on the outer IPv4
ID header in the case of tunneled frames.  Specifically it forces the IP ID
to be incrementing by 1 unless the DF bit is set in the outer IPv4 header.
This way we can avoid creating overlapping series of IP IDs that could
possibly be fragmented if the frame goes through GRO and is then
resegmented via GSO.

Signed-off-by: Alexander Duyck &lt;aduyck@mirantis.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</pre>
</div>
</content>
</entry>
</feed>
