| author | Jakub Kicinski <kuba@kernel.org> | 2025-02-19 19:05:30 -0800 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2025-02-19 19:05:31 -0800 |
| commit | 22af030f01f9a0fe7fde73970df6632f7d9c47fd (patch) | |
| tree | 36219ec22b92aa151525292a16cc1ef06d4c3b3c /include/net/sock.h | |
| parent | 9a6c2b2bdd5ed46f3ab364c975ea7b772b29aec2 (diff) | |
| parent | e0ca4057e0ecd4b10f27892fe6f1ac2a7fd25ab4 (diff) | |
Merge branch 'mptcp-rx-path-refactor'
Matthieu Baerts says:
====================
mptcp: rx path refactor
Paolo worked on this RX path refactor for these two main reasons:
- Currently, the MPTCP RX path introduces quite a bit of 'exceptional'
accounting/locking processing WRT plain TCP, adding to the
implementation complexity in a miserable way.
- The performance gap WRT plain TCP for single subflow connections is
quite measurable.
The present refactor addresses both of the above items: most of the
additional complexity is dropped, and single-stream performance
increases measurably, from 55 Gbps to 71 Gbps in Paolo's loopback test.
As a reference, plain TCP was around 84 Gbps on the same host.
The above comes at a price: the patches are invasive, even in subtle ways.
Note: patch 5/7 removes the sk_forward_alloc_get() helper, which required
some trivial modifications in different places in the net tree: sockets,
IPv4, sched. That's why a few more people have been Cc'ed here. Feel free
to look only at patch 5/7.
====================
Link: https://patch.msgid.link/20250218-net-next-mptcp-rx-path-refactor-v1-0-4a47d90d7998@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'include/net/sock.h')
| -rw-r--r-- | include/net/sock.h | 13 |
1 file changed, 0 insertions, 13 deletions
```diff
diff --git a/include/net/sock.h b/include/net/sock.h
index fac65ed30983..edbb870e3f86 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1285,10 +1285,6 @@ struct proto {
 	unsigned int		inuse_idx;
 #endif
 
-#if IS_ENABLED(CONFIG_MPTCP)
-	int			(*forward_alloc_get)(const struct sock *sk);
-#endif
-
 	bool			(*stream_memory_free)(const struct sock *sk, int wake);
 	bool			(*sock_is_readable)(struct sock *sk);
 	/* Memory pressure */
@@ -1349,15 +1345,6 @@ int sock_load_diag_module(int family, int protocol);
 
 INDIRECT_CALLABLE_DECLARE(bool tcp_stream_memory_free(const struct sock *sk, int wake));
 
-static inline int sk_forward_alloc_get(const struct sock *sk)
-{
-#if IS_ENABLED(CONFIG_MPTCP)
-	if (sk->sk_prot->forward_alloc_get)
-		return sk->sk_prot->forward_alloc_get(sk);
-#endif
-	return READ_ONCE(sk->sk_forward_alloc);
-}
-
 static inline bool __sk_stream_memory_free(const struct sock *sk, int wake)
 {
 	if (READ_ONCE(sk->sk_wmem_queued) >= READ_ONCE(sk->sk_sndbuf))
```
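For context on what the patch 5/7 note above means for callers outside MPTCP, here is a hedged before/after sketch. It is not taken from the series itself: the sk_get_meminfo()/SK_MEMINFO_FWD_ALLOC call site is named as an assumption, standing in for the "sockets, IPv4, sched" users the cover letter mentions. The point is simply that, with the forward_alloc_get() proto op gone, callers fall back to the plain read the removed helper already performed for every protocol other than MPTCP.

```c
/*
 * Editorial sketch, not part of the series: how a generic caller is
 * expected to change once sk_forward_alloc_get() is removed. The
 * sk_get_meminfo() context is assumed here for illustration.
 */

/* Before patch 5/7: indirect through the MPTCP-only proto op. */
mem[SK_MEMINFO_FWD_ALLOC] = sk_forward_alloc_get(sk);

/* After patch 5/7: MPTCP keeps sk->sk_forward_alloc up to date itself,
 * so every caller can use the same lockless read that plain TCP used.
 */
mem[SK_MEMINFO_FWD_ALLOC] = READ_ONCE(sk->sk_forward_alloc);
```

The design consequence is that struct proto loses one MPTCP-only indirect call on a path shared by all protocols, which is consistent with the complexity and single-stream performance goals stated in the cover letter.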
