authorDavid S. Miller <davem@davemloft.net>2023-12-17 10:56:33 +0000
committerDavid S. Miller <davem@davemloft.net>2023-12-17 10:56:33 +0000
commit3a3af3aedb00258f0bd49f260eabcea1d88108a1 (patch)
treed3b4e29600c5578796116a57600e473f954f85d5 /include
parent66fe896351d0505f529eefe4715f6c669f49cbd9 (diff)
parentf7dc3248dcfbdd81b5be64272f38b87a8e8085e7 (diff)
Merge branch 'skb-coalescing-page_pool'
Liang Chen says:

====================
skbuff: Optimize SKB coalescing for page pool

The combination of the following conditions was excluded from skb
coalescing:

  from->pp_recycle = 1
  from->cloned = 1
  to->pp_recycle = 1

With page pool in use, this combination can be quite common (e.g.
NetworkManager may lead to an additional packet_type being registered,
thus the cloning). In scenarios with a higher number of small packets,
it can significantly affect the success rate of coalescing. This
patchset aims to optimize this scenario and enable coalescing of this
particular combination. That also involves supporting multiple users
referencing the same fragment of a pp page, to accommodate the need to
increment the "from" SKB page's pp page reference count.

Changes from v10:
- re-number patches to 1/3, 2/3, 3/3

Changes from v9:
- patch 1 was already applied
- improve description for patch 2
- make sure skb_pp_frag_ref only works for pp-aware skbs
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include')
-rw-r--r--include/net/page_pool/helpers.h5
1 file changed, 5 insertions, 0 deletions
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index ead2c0d24b2c..1d397c1a0043 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -277,6 +277,11 @@ static inline long page_pool_unref_page(struct page *page, long nr)
return ret;
}
+static inline void page_pool_ref_page(struct page *page)
+{
+ atomic_long_inc(&page->pp_ref_count);
+}
+
static inline bool page_pool_is_last_ref(struct page *page)
{
/* If page_pool_unref_page() returns 0, we were the last user */