| author | Eric Dumazet <edumazet@google.com> | 2026-01-07 10:41:59 +0000 |
|---|---|---|
| committer | Paolo Abeni <pabeni@redhat.com> | 2026-01-13 10:12:11 +0100 |
| commit | ffe4ccd359d006eba559cb1a3c6113144b7fb38c (patch) | |
| tree | 00547b3df467260871b935d0fb3347f3c556c026 /net | |
| parent | dfdf774656205515b2d6ad94fce63c7ccbe92d91 (diff) | |
net: add net.core.qdisc_max_burst
In the blamed commit, I added a check against the temporary queue
built in __dev_xmit_skb(). The idea was to drop packets early,
before any spinlock was acquired.
if (unlikely(defer_count > READ_ONCE(q->limit))) {
	kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_DROP);
	return NET_XMIT_DROP;
}
It turned out that the HTB Qdisc has a zero q->limit,
because HTB limits packets on a per-class basis.
As a result, some of our tests became flaky.
Add a new sysctl, net.core.qdisc_max_burst, to control
how many packets can be stored in the temporary lockless queue.
Also add a new QDISC_BURST_DROP drop reason to better diagnose
future issues.
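As an illustration only (not part of this patch), here is a minimal
user-space sketch that reads the new tunable through procfs; it
assumes a kernel carrying this change, and the default of 1000
matches the net/core/hotdata.c hunk below.

/* Hypothetical example, not from the patch: print the current
 * net.core.qdisc_max_burst value via procfs, equivalent to
 * running "sysctl net.core.qdisc_max_burst".
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/net/core/qdisc_max_burst", "r");
	int burst;

	if (!f) {
		perror("qdisc_max_burst");
		return 1;
	}
	if (fscanf(f, "%d", &burst) == 1)
		printf("net.core.qdisc_max_burst = %d\n", burst);
	fclose(f);
	return 0;
}

The value can also be changed at runtime, e.g. with
"sysctl -w net.core.qdisc_max_burst=2000" (2000 being an arbitrary
example value).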
Thanks Neal!
Fixes: 100dfa74cad9 ("net: dev_queue_xmit() llist adoption")
Reported-and-bisected-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Link: https://patch.msgid.link/20260107104159.3669285-1-edumazet@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Diffstat (limited to 'net')
| -rw-r--r-- | net/core/dev.c | 6 |
| -rw-r--r-- | net/core/hotdata.c | 1 |
| -rw-r--r-- | net/core/sysctl_net_core.c | 7 |
3 files changed, 11 insertions, 3 deletions
diff --git a/net/core/dev.c b/net/core/dev.c
index 9af9c3df452f..ccef685023c2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4203,8 +4203,8 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 	do {
 		if (first_n && !defer_count) {
 			defer_count = atomic_long_inc_return(&q->defer_count);
-			if (unlikely(defer_count > READ_ONCE(q->limit))) {
-				kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_DROP);
+			if (unlikely(defer_count > READ_ONCE(net_hotdata.qdisc_max_burst))) {
+				kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_BURST_DROP);
 				return NET_XMIT_DROP;
 			}
 		}
@@ -4222,7 +4222,7 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 	ll_list = llist_del_all(&q->defer_list);
 	/* There is a small race because we clear defer_count not atomically
 	 * with the prior llist_del_all(). This means defer_list could grow
-	 * over q->limit.
+	 * over qdisc_max_burst.
 	 */
 	atomic_long_set(&q->defer_count, 0);
diff --git a/net/core/hotdata.c b/net/core/hotdata.c
index dddd5c287cf0..a6db36580817 100644
--- a/net/core/hotdata.c
+++ b/net/core/hotdata.c
@@ -17,6 +17,7 @@ struct net_hotdata net_hotdata __cacheline_aligned = {
 	.tstamp_prequeue = 1,
 	.max_backlog = 1000,
+	.qdisc_max_burst = 1000,
 	.dev_tx_weight = 64,
 	.dev_rx_weight = 64,
 	.sysctl_max_skb_frags = MAX_SKB_FRAGS,
diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
index 8d4decb2606f..05dd55cf8b58 100644
--- a/net/core/sysctl_net_core.c
+++ b/net/core/sysctl_net_core.c
@@ -430,6 +430,13 @@ static struct ctl_table net_core_table[] = {
 		.proc_handler = proc_dointvec
 	},
 	{
+		.procname = "qdisc_max_burst",
+		.data = &net_hotdata.qdisc_max_burst,
+		.maxlen = sizeof(int),
+		.mode = 0644,
+		.proc_handler = proc_dointvec
+	},
+	{
 		.procname = "netdev_rss_key",
 		.data = &netdev_rss_key,
 		.maxlen = sizeof(int),
