path: root/net
2005-10-04  [PATCH] ieee80211: fix gfp flags type  (Randy Dunlap)
Fix implicit nocast warnings in ieee80211 code, including __nocast: net/ieee80211/ieee80211_tx.c:215:9: warning: implicit cast to nocast type Signed-off-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
2005-10-03  [PATCH] ieee80211: fix gfp flags type  (Randy Dunlap)
Fix implicit nocast warnings in ieee80211 code: net/ieee80211/ieee80211_tx.c:215:9: warning: implicit cast to nocast type Signed-off-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
2005-10-03  [IPV4]: Update icmp sysctl docs and disable broadcast ECHO/TIMESTAMP by default  (David S. Miller)
It's not a good idea to be smurf'able by default. The few people who need this can turn it on. Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-03  [IPV4]: Get rid of bogus __in_put_dev in pktgen  (Herbert Xu)
This patch gets rid of a bogus __in_dev_put() in pktgen.c. This was spotted by Suzanne Wood. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-03  [IPV4]: Replace __in_dev_get with __in_dev_get_rcu/rtnl  (Herbert Xu)
The following patch renames __in_dev_get() to __in_dev_get_rtnl() and introduces __in_dev_get_rcu() to cover the second case below:

    1) RCU with refcnt should use in_dev_get().
    2) RCU without refcnt should use __in_dev_get_rcu().
    3) All others must hold RTNL and use __in_dev_get_rtnl().

There is one exception in net/ipv4/route.c which is in fact a pre-existing race condition. I've marked it as such so that we remember to fix it.

This patch is based on suggestions and prior work by Suzanne Wood and Paul McKenney.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
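A minimal usage sketch of the three rules above (kernel-style; not taken from the patch, and the dev/in_dev variables are illustrative):

    /* 1) RCU with refcnt: take a reference that outlives the read section */
    in_dev = in_dev_get(dev);
    if (in_dev) {
            /* ... use in_dev, even after leaving RCU context ... */
            in_dev_put(in_dev);
    }

    /* 2) RCU without refcnt: only valid inside the read-side section */
    rcu_read_lock();
    in_dev = __in_dev_get_rcu(dev);
    /* ... use in_dev here only ... */
    rcu_read_unlock();

    /* 3) Everyone else must hold the RTNL */
    ASSERT_RTNL();
    in_dev = __in_dev_get_rtnl(dev);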
2005-10-03  [IPV6]: Fix leak added by udp connect dst caching fix.  (David S. Miller)
Based upon a patch from Mitsuru KANDA <mk@linux-ipv6.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-03  [IPV6]: Fix ipv6 fragment ID selection at slow path  (Yan Zheng)
Signed-Off-By: Yan Zheng <yanzheng@21cn.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-03  [IPV4]: Fix "Proxy ARP seems broken"  (Herbert Xu)
Meelis Roos <mroos@linux.ee> wrote:

> RK> My firewall setup relies on proxyarp working. However, with 2.6.14-rc3,
> RK> it appears to be completely broken. The firewall is 212.18.232.186,
>
> Same here with some kernel between 14-rc2 and 14-rc3 - no response to
> ARP on a proxyarp gateway. Sorry, no exact revision and no more debugging
> yet since it's a production gateway.

The breakage is caused by the change to use the CB area for flagging whether a packet has been queued due to proxy_delay. This area gets cleared every time arp_rcv gets called. Unfortunately, packets delayed due to proxy_delay also go through arp_rcv when they are reprocessed.

In fact, I can't think of a reason why delayed proxy packets should go through netfilter again at all. So the easiest solution is to bypass that and go straight to arp_process. This is essentially what would've happened before netfilter support was added to ARP.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-03  [NET]: Fix "sysctl_net.c:36: error: 'core_table' undeclared here"  (Russell King)
During the build for ARM machine type "fortunet", this error occurred:

    CC      net/sysctl_net.o
    net/sysctl_net.c:36: error: 'core_table' undeclared here (not in a function)

It appears that the following configuration settings cause this error due to a missing include:

    CONFIG_SYSCTL=y
    CONFIG_NET=y
    # CONFIG_INET is not set

core_table appears to be declared in net/sock.h. If CONFIG_INET were defined, net/sock.h would have been included via:

    sysctl_net.c -> net/ip.h -> linux/ip.h -> net/sock.h

so include it directly.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-03  [INET]: speedup inet (tcp/dccp) lookups  (Eric Dumazet)
Arnaldo and I agreed it could be applied now, because I have other pending patches depending on this one (Thank you Arnaldo). (The other important patch moves skc_refcnt into a separate cache line, so that the SMP/NUMA performance doesn't suffer from cache line ping-pongs.)

1) First some performance data:
--------------------------------
tcp_v4_rcv() wastes a *lot* of time in __inet_lookup_established(). The most time-critical code is:

    sk_for_each(sk, node, &head->chain) {
            if (INET_MATCH(sk, acookie, saddr, daddr, ports, dif))
                    goto hit; /* You sunk my battleship! */
    }

The sk_for_each() does use prefetch() hints, but only the beginning of "struct sock" is prefetched. As INET_MATCH's first comparison uses inet_sk(__sk)->daddr, which is far away from the beginning of "struct sock", it has to bring a cold cache line into the CPU cache. Each iteration has to use at least 2 cache lines. This can be problematic if some chains are very long.

2) The goal
-----------
The idea I had is to change things so that INET_MATCH() may return FALSE in 99% of cases using only the data already in the CPU cache, i.e. one cache line per iteration.

3) Description of the patch
---------------------------
Adds a new 'unsigned int skc_hash' field in 'struct sock_common', filling a 32-bit hole on 64-bit platforms.

    struct sock_common {
            unsigned short          skc_family;
            volatile unsigned char  skc_state;
            unsigned char           skc_reuse;
            int                     skc_bound_dev_if;
            struct hlist_node       skc_node;
            struct hlist_node       skc_bind_node;
            atomic_t                skc_refcnt;
    +       unsigned int            skc_hash;
            struct proto            *skc_prot;
    };

Store in this 32-bit field the full hash, not masked by (ehash_size - 1). Using this full hash as the first comparison done in INET_MATCH permits us to immediately skip the element without touching a second cache line in case of a miss.

Remove the sk_hashent/tw_hashent fields, since skc_hash (aliased to sk_hash and tw_hash) already contains the slot number if we mask with (ehash_size - 1).

File include/net/inet_hashtables.h

64-bit platforms:

    #define INET_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
            (((__sk)->sk_hash == (__hash))                          &&  \
             ((*((__u64 *)&(inet_sk(__sk)->daddr))) == (__cookie))  &&  \
             ((*((__u32 *)&(inet_sk(__sk)->dport))) == (__ports))   &&  \
             (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))

32-bit platforms:

    #define TCP_IPV4_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
            (((__sk)->sk_hash == (__hash))                          &&  \
             (inet_sk(__sk)->daddr == (__saddr))                    &&  \
             (inet_sk(__sk)->rcv_saddr == (__daddr))                &&  \
             (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))

- Adds a prefetch(head->chain.first) in __inet_lookup_established()/__tcp_v4_check_established() and __inet6_lookup_established()/__tcp_v6_check_established() and __dccp_v4_check_established() to bring the first element of the list into cache before the {read|write}_lock(&head->lock).

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Acked-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
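For contrast with the loop quoted above, a rough sketch of what the established-hash lookup looks like with the patch applied (illustrative only, not the exact kernel code):

    prefetch(head->chain.first);
    read_lock(&head->lock);
    sk_for_each(sk, node, &head->chain) {
            /* INET_MATCH now compares sk->sk_hash first, so a
             * non-matching socket is skipped using only the first
             * (already prefetched) cache line of struct sock_common */
            if (INET_MATCH(sk, hash, acookie, saddr, daddr, ports, dif))
                    goto hit;
    }
    read_unlock(&head->lock);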
2005-10-03  [NET]: Fix packet timestamping.  (Herbert Xu)
I've found the problem in general. It affects any 64-bit architecture. The problem occurs when you change the system time. Suppose that when you boot your system clock is forward by a day. This gets recorded down in skb_tv_base. You then wind the clock back by a day. From that point onwards the offset will be negative which essentially overflows the 32-bit variables they're stored in. In fact, why don't we just store the real time stamp in those 32-bit variables? After all, we're not going to overflow for quite a while yet. When we do overflow, we'll need a better solution of course. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
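A tiny self-contained illustration of the wrap-around described above (the numbers and variable names are made up, nothing here is from the patch):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            int64_t skb_tv_base_sec = 86400;  /* "base" recorded at boot, one day ahead */
            int64_t now_sec         = 0;      /* system clock after being wound back a day */

            /* the negative offset wraps once it is stored in a 32-bit field */
            uint32_t stored = (uint32_t)(now_sec - skb_tv_base_sec);
            printf("stored 32-bit offset: %u\n", stored);
            return 0;
    }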
2005-09-29  [ATM]: [lec] reset retry counter when new arp issued  (Scott Talbert)
From: Scott Talbert <scott.talbert@lmco.com> Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-29  [ATM]: [lec] attempt to support cisco failover  (Scott Talbert)
From: Scott Talbert <scott.talbert@lmco.com> Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-29  [TCP]: Don't over-clamp window in tcp_clamp_window()  (Alexey Kuznetsov)
From: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>

Handle better the case where the sender sends full-sized frames initially, then moves to a mode where it trickles out small amounts of data at a time.

This known problem is even mentioned in the comments above tcp_grow_window() in tcp_input.c, specifically:

    ...
    * The scheme does not work when sender sends good segments opening
    * window and then starts to feed us spagetti. But it should work
    * in common situations. Otherwise, we have to rely on queue collapsing.
    ...

When the sender gives full-sized frames, the "struct sk_buff" overhead from each packet is small, so we'll advertise a larger window. If the sender moves to a mode where small segments are sent, this ratio becomes tilted to the other extreme and we start overrunning the socket buffer space.

tcp_clamp_window() tries to address this, but its clamping of tp->window_clamp is a wee bit too aggressive for this particular case.

Fix confirmed by Ion Badulescu.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-29  [TCP]: Revert 6b251858d377196b8cea20e65cae60f584a42735  (David S. Miller)
But retain the comment fix.

Alexey Kuznetsov has explained the situation as follows:
--------------------
I think the fix is incorrect. Look, the RFC function init_cwnd(mss) is not continuous: f.e. for mss=1095 it needs initial window 1095*4, but for mss=1096 it is 1096*3. We do not know exactly what mss the sender used for its calculations. If we advertised 1096 (and calculated an initial window of 3*1096), the sender could limit it to some value < 1096 and then it would need a window of his_mss*4 > 3*1096 to send the initial burst. See?

So, the honest function for initial rcv_wnd derived from tcp_init_cwnd() is:

    init_rcv_wnd(mss) = min { init_cwnd(mss1)*mss1 for mss1 <= mss }

It is something sort of:

    if (mss < 1096)
            return mss*4;
    if (mss < 1096*2)
            return 1096*4;
    return mss*2;

(I just scrawled a graph on a piece of paper; it is difficult to see or to explain without this.)

I selected it differently, giving more window than is strictly required. The initial receive window must be large enough to allow a sender following the RFC (or just setting initial cwnd to 2) to send the initial burst. But besides that it is arbitrary, so I decided to give slack space of one segment.

Actually, the logic was: if mss is low/normal (<= ethernet), set the window to receive more than the initial burst allowed by the RFC under the worst conditions, i.e. mss*4. This gives slack space of 1 segment for ethernet frames. For msses slightly more than an ethernet frame, take 3. Try to give slack space of 1 frame again. If mss is huge, force 2*mss. No slack space.

Value 1460*3 is really confusing. The minimal one is 1096*2, but besides that it is an arbitrary value. It was meant to be ~4096. 1460*3 is just the magic number from the RFC, 1460*3 = 1095*4 is the magic :-), so I guess hands typed this themselves.
--------------------

Signed-off-by: David S. Miller <davem@davemloft.net>
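Alexey's pseudo-code above, written out as a small standalone helper (the function name is mine, not from the patch; the 1096 break-points are the ones he quotes):

    /* sketch: initial receive window in bytes, per the policy described above */
    static unsigned int init_rcv_wnd_sketch(unsigned int mss)
    {
            if (mss < 1096)
                    return mss * 4;       /* low/normal mss: one segment of slack */
            if (mss < 1096 * 2)
                    return 1096 * 4;      /* slightly above ethernet-sized frames */
            return mss * 2;               /* huge mss: no slack space */
    }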
2005-09-29  Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6  (Linus Torvalds)
2005-09-29  [PATCH] proc_mkdir() should be used to create procfs directories  (Al Viro)
A bunch of create_proc_dir_entry() calls creating directories had crept in since the last sweep; converted to proc_mkdir(). Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-28  [NET]: Fix reversed logic in eth_type_trans().  (David S. Miller)
I got the second compare_eth_addr() test reversed, oops. Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-28  [ATM]: fix bug in atm address list handling  (Martin Whitaker)
From: Martin Whitaker <atm@martin-whitaker.co.uk> Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil>
2005-09-28  [ATM]: track and close listen sockets when sigd exits  (Chas Williams)
Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil>
2005-09-28  [ATM]: net/atm/ioctl.c: autoload pppoatm and br2684  (Roman Kagan)
Signed-off-by: Roman Kagan <rkagan@mail.ru> Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil>
2005-09-28  [TCP]: Fix init_cwnd calculations in tcp_select_initial_window()  (David S. Miller)
Match it up to what RFC2414 really specifies. Noticed by Rick Jones. Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27  [APPLETALK]: Fix broadcast bug.  (Oliver Dawid)
From: Oliver Dawid <oliver@helios.de>

We found a bug in net/appletalk/ddp.c concerning broadcast packets. In kernel 2.4 it was working fine. The bug first occurred 4 years ago when switching to the new SNAP layer handling. This bug can be split up into a sending (1) and a reception (2) problem:

Sending (1): In kernel 2.4, broadcast packets were sent to a matching ethernet device and atalk_rcv() was called to receive them as "loopback" (so loopback packets were shortcut and handled in the DDP layer). When switching to the new SNAP structure, this shortcut was removed and the loopback packet was sent to the SNAP layer. The author forgot to replace the remote device pointer with the loopback device pointer before sending the packet to the SNAP layer (by calling ddp_dl->request()), therefore the packet was not sent back by the underlying layers to ddp's atalk_rcv().

Reception (2): In atalk_rcv(), a packet received by this loopback mechanism now contains the (right) loopback device pointer (in kernel 2.4 it was the (wrong) remote ethernet device pointer) and therefore no matching socket will be found to deliver this packet to. Because a broadcast packet should be sent to the first matching socket (as is done in many other protocols (?)), we removed the network comparison in the broadcast case.

Below you will find a patch to correct this bug. It is diffed against kernel 2.6.14-rc1.

Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27  [NET]: Slightly optimize ethernet address comparison.  (David S. Miller)
We know the thing is at least 2-byte aligned, so take advantage of that instead of invoking memcmp() which results in truly horrifically inefficient code because it can't assume anything about alignment. Signed-off-by: David S. Miller <davem@davemloft.net>
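A self-contained sketch of the trick (the helper name is hypothetical; the real helper added to the kernel follows the same idea):

    #include <stdint.h>

    /* Compare two 6-byte ethernet addresses that are known to be at least
     * 2-byte aligned, using three 16-bit loads instead of memcmp().
     * Returns non-zero if the addresses differ. */
    static inline int ether_addr_diff_sketch(const void *a, const void *b)
    {
            const uint16_t *x = a, *y = b;

            return (x[0] ^ y[0]) | (x[1] ^ y[1]) | (x[2] ^ y[2]);
    }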
2005-09-27  [ROSE]: fix typo (regeistration)  (Alexey Dobriyan)
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27  [ROSE]: check rose_ndevs earlier  (Alexey Dobriyan)
* Don't bother with proto registering if rose_ndevs is bad. * Make escape structure more coherent. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27  [ROSE]: return sane -E* from rose_proto_init()  (Alexey Dobriyan)
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27  [ROSE]: do proto_unregister() on exit paths  (Alexey Dobriyan)
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27  [NET]: Fix module reference counts for loadable protocol modules  (Frank Filz)
I have been experimenting with loadable protocol modules, and ran into several issues with module reference counting.

The first issue was that __module_get failed at the BUG_ON check at the top of the routine (checking that my module reference count was not zero) when I created the first socket. When sk_alloc() was called, my module reference count was still 0. When I looked at why sctp didn't have this problem, I discovered that sctp creates a control socket during module init (when the module ref count is not 0), which keeps the reference count non-zero. This section has been updated to address the point Stephen raised about checking the return value of try_module_get().

The next problem arose when my socket init routine returned an error. This resulted in my module reference count being decremented below 0. My socket ops->release routine was also being called. The issue here is that sock_release() calls the ops->release routine and decrements the ref count if sock->ops is not NULL. Since the socket probably didn't get correctly initialized, this should not be done, so we will set sock->ops to NULL because we will not call try_module_get().

While searching for another bug, I also noticed that sys_accept() has a possibility of doing a module_put() when it did not do an __module_get, so I re-ordered the call to security_socket_accept().

Signed-off-by: Frank Filz <ffilzlnx@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
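A rough sketch of the error-path rule being described (illustrative only; myproto_init_sock() is a hypothetical per-protocol init hook, and this is not the change that was merged):

    err = myproto_init_sock(sk);
    if (err) {
            /* init failed: sock_release() must not call ops->release()
             * or drop a module reference that was never taken */
            sock->ops = NULL;
            return err;
    }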
2005-09-27  [NET]: Prefetch dev->qdisc_lock in dev_queue_xmit()  (Eric Dumazet)
We know the lock is going to be taken. Signed-off-by: Eric Dumazet <dada1@cosmosbay.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27  [NET]: Use non-recursive algorithm in skb_copy_datagram_iovec()  (Daniel Phillips)
Use iteration instead of recursion. Fraglists within fraglists should never occur, so we BUG check this. Signed-off-by: Daniel Phillips <phillips@istop.com> Signed-off-by: David S. Miller <davem@davemloft.net>
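The shape of the change, sketched (not the exact code):

    /* walk the top-level fraglist iteratively; fraglists within
     * fraglists should never occur, so trap them instead of recursing */
    for (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->next) {
            BUG_ON(skb_shinfo(frag)->frag_list);
            /* ... copy this fragment's data into the iovec ... */
    }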
2005-09-27  [NEIGH]: Add debugging check when adding timers.  (David S. Miller)
If we double-add a neighbour entry timer, which should be impossible but has been reported, dump the current state of the entry so that we can debug this. Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-26  Merge master.kernel.org:/pub/scm/linux/kernel/git/acme/llc-2.6  (David S. Miller)
2005-09-26  [NETFILTER]: Fix invalid module autoloading by splitting iptable_nat  (Harald Welte)
When you've enabled conntrack and NAT as a module (the standard case in all distributions), and you've also enabled the new conntrack netlink interface, loading ip_conntrack_netlink.ko will auto-load iptable_nat.ko. This causes a huge performance penalty, since for every packet you iterate the NAT code, even if you don't want it.

This patch splits iptable_nat.ko into the NAT core (ip_nat.ko) and the iptables frontend (iptable_nat.ko). Therefore, ip_conntrack_netlink.ko will only pull in ip_nat.ko, but not the frontend. ip_nat.ko will "only" allocate some resources, but not affect runtime performance.

This separation is also a nice step in anticipation of new packet filters (nf-hipac, ipset, pkttables) being able to use the NAT core.

Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-26  [AF_PACKET]: Remove bogus checks added to packet_sendmsg().  (David S. Miller)
These broke existing apps, and the checks are superfluous as the values being verified aren't even used. Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-26  [IPV6]: Fix [Bug 5306] Oops on IPv6 route lookup  (Herbert Xu)
> Steps to reproduce:
> 1. Boot Linux, do NOT setup any IPv6 routes
> 2. ip route get 2001::1 (or any unroutable address)

Well caught. We never set rt6i_idev on ip6_null_entry. This patch should make the problem go away.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-26  [NET]: Make sure ctl buffer is aligned properly in sys_sendmsg().  (Alex Williamson)
It's on the stack and declared as "unsigned char[]", but pointers and similar can be stored in it, so we need to give it an explicit alignment attribute.

Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
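A minimal sketch of that kind of declaration (the buffer size and the alignment expression here are placeholders, not the values used in the actual fix):

    /* on-stack control buffer that may hold pointer-sized objects,
     * so it gets an explicit alignment attribute */
    unsigned char ctl[64] __attribute__((aligned(sizeof(void *))));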
2005-09-24  [NETFILTER] ip_conntrack: Update event cache when status changes  (Harald Welte)
The GRE, SCTP and TCP protocol helpers did not call ip_conntrack_event_cache() when updating ct->status. This patch adds the respective calls. Signed-off-by: Harald Welte <laforge@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-24  [IRDA]: *irttp cleanup  (Alexey Dobriyan)
* Remove useless comment.
* Remove useless assertions.
* Remove useless comparison.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-24  [IRDA]: Fix memory leak in irttp_init()  (Alexey Dobriyan)
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-24  [NET]: Protect neigh_stat_seq_fops by CONFIG_PROC_FS  (Amos Waterland)
From: Amos Waterland <apw@us.ibm.com>

If CONFIG_PROC_FS is not selected, the compiler emits this warning:

    net/core/neighbour.c:64: warning: `neigh_stat_seq_fops' defined but not used

Which is correct, because neigh_stat_seq_fops is in fact only initialized and used by code that is protected by CONFIG_PROC_FS. So this patch fixes that up.

Signed-off-by: Amos Waterland <apw@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
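Presumably the fix wraps the definition in the same guard; a sketch of that shape (initializers elided):

    #ifdef CONFIG_PROC_FS
    static struct file_operations neigh_stat_seq_fops = {
            /* .open / .read / .llseek / .release ... */
    };
    #endif /* CONFIG_PROC_FS */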
2005-09-24  [NETFILTER]: Fix ip[6]t_NFQUEUE Kconfig dependency  (Harald Welte)
We have to introduce a separate Kconfig menu entry for the NFQUEUE targets. They cannot "just" depend on nfnetlink_queue, since nfnetlink_queue could be linked into the kernel, whereas iptables can be a module. Signed-off-by: Harald Welte <laforge@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22  [SCTP]: Fix SCTP_SHUTDOWN notifications.  (Sridhar Samudrala)
Fix to allow SCTP_SHUTDOWN notifications to be received on 1-1 style SCTP SOCK_STREAM sockets. Add SCTP_SHUTDOWN notification to the receive queue before updating the state of the association. Signed-off-by: Sridhar Samudrala <sri@us.ibm.com> Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22  [NETFILTER] Fix conntrack event cache deadlock/oops  (Harald Welte)
This patch fixes a number of bugs. It cannot be reasonably split up into multiple fixes, since all the bugs interact with each other and affect the same function:

Bug #1: The event cache code cannot be called while a lock is held. Therefore, the call to ip_conntrack_event_cache() within ip_ct_refresh_acct() needs to be moved outside of the locked section. This fixes a number of 2.6.14-rcX oops and deadlock reports.

Bug #2: We used to call ct_add_counters() for unconfirmed connections without holding a lock. Since the add operations are not atomic, we could race with another CPU.

Bug #3: ip_ct_refresh_acct() lost REFRESH events in some cases where a refresh (and the corresponding event) is desired, but no accounting shall be performed. Both events and accounting implicitly depended on the skb parameter being non-NULL. We now re-introduce a non-accounting "ip_ct_refresh()" variant to explicitly state the desired behaviour.

Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
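A sketch of the bug #1 shape (function and event names come from the message above; the body is illustrative, not the merged diff):

    static void ip_ct_refresh_acct_sketch(struct ip_conntrack *ct,
                                          const struct sk_buff *skb,
                                          unsigned long extra_jiffies)
    {
            write_lock_bh(&ip_conntrack_lock);
            /* ... update ct->timeout and, when skb is non-NULL, the
             * accounting counters, all under the lock ... */
            write_unlock_bh(&ip_conntrack_lock);

            /* the event cache must not be called with the lock held,
             * so generate the event only after unlocking */
            ip_conntrack_event_cache(IPCT_REFRESH, skb);
    }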
2005-09-22  [NETFILTER] Fix sparse endian warnings in pptp helper  (Alexey Dobriyan)
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Harald Welte <laforge@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22  [NETFILTER] fix DEBUG statement in PPTP helper  (Harald Welte)
As noted by Alexey Dobriyan, the DEBUGP statement prints the wrong callID. Signed-off-by: Harald Welte <laforge@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22  [BRIDGE]: TSO fix in br_dev_queue_push_xmit  (Vlad Drukker)
Signed-off-by: Vlad Drukker <vlad@storewiz.com> Acked-by: Stephen Hemminger <shemminger@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22  [TCP]: Adjust Reno SACK estimate in tcp_fragment  (Herbert Xu)
Since the introduction of TSO pcount a year ago, it has been possible for tcp_fragment() to cause packets_out to decrease. Prior to that, tcp_retrans_try_collapse() was the only way for that to happen on the retransmission path.

When this happens with Reno, it is possible for sacked_out to become invalid because it is only an estimate and not tied to any particular packet on the retransmission queue. Therefore we need to adjust sacked_out as well as left_out in the Reno case. The following patch does exactly that.

This bug is pretty difficult to trigger in practice though, since you need a SACKless peer with a retransmission that occurs just as the cached MTU value expires.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
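Roughly, the adjustment being described (a sketch with the field names used in the message, not the committed diff):

    /* tcp_fragment() has just reduced packets_out on the retransmit
     * path; with a SACKless (Reno) peer, sacked_out is only an
     * estimate, so shrink it and recompute left_out to match */
    if (!tp->rx_opt.sack_ok && tp->sacked_out) {
            tp->sacked_out--;
            tcp_sync_left_out(tp);
    }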
2005-09-22  [LLC]: fix llc_ui_recvmsg, making it behave like tcp_recvmsg  (Arnaldo Carvalho de Melo)
In fact it is an exact copy of the parts that make sense for LLC :-)

Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
2005-09-22  [LLC]: Fix the accept path  (Arnaldo Carvalho de Melo)
Borrowing the structure of TCP/IP for this. On receipt of new connections I was bh_lock_sock()ing the _new_ sock, not the listening one, duh; now it survives the ssh connection storm I've been using to test this specific bug.

Also fixes send-side skb sock accounting.

Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>