| author | Eric Dumazet <edumazet@google.com> | 2026-01-07 10:41:59 +0000 |
|---|---|---|
| committer | Paolo Abeni <pabeni@redhat.com> | 2026-01-13 10:12:11 +0100 |
| commit | ffe4ccd359d006eba559cb1a3c6113144b7fb38c | |
| tree | 00547b3df467260871b935d0fb3347f3c556c026 /net/core/dev.c | |
| parent | dfdf774656205515b2d6ad94fce63c7ccbe92d91 | |
net: add net.core.qdisc_max_burst
In the blamed commit, I added a check against the temporary queue
built in __dev_xmit_skb(). The idea was to drop packets early,
before any spinlock was acquired.
```c
	if (unlikely(defer_count > READ_ONCE(q->limit))) {
		kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_DROP);
		return NET_XMIT_DROP;
	}
```
It turned out that the HTB qdisc has a zero q->limit:
HTB limits packets on a per-class basis, not on the qdisc as a whole.
Some of our tests became flaky.
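To make the failure mode concrete (a minimal illustration, not code from
the patch): with q->limit stuck at zero for HTB, the early-drop test
quoted above degenerates as follows and fires on the very first
deferred packet:

```c
/* Illustration only: for HTB, READ_ONCE(q->limit) == 0, so the check
 * quoted above effectively becomes:
 */
if (unlikely(defer_count > 0)) {	/* true for the first packet */
	kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_DROP);
	return NET_XMIT_DROP;
}
```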
Add a new sysctl, net.core.qdisc_max_burst, to control
how many packets can be stored in the temporary lockless queue.
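A rough sketch of how such a knob is typically wired up (hypothetical:
the real registration lives outside the net/core/dev.c diff below, and
the table entry, handler, and field type shown here are assumptions):

```c
#include <linux/sysctl.h>
#include <net/hotdata.h>

/* Hypothetical sysctl table entry; the actual one would live in
 * net/core/sysctl_net_core.c and may use a different handler or bounds.
 */
static struct ctl_table qdisc_max_burst_table[] = {
	{
		.procname	= "qdisc_max_burst",
		.data		= &net_hotdata.qdisc_max_burst,
		.maxlen		= sizeof(unsigned int),	/* assumed type */
		.mode		= 0644,
		.proc_handler	= proc_douintvec,
	},
};
```

Once registered, the limit can be inspected or tuned at runtime, e.g.
`sysctl -w net.core.qdisc_max_burst=1024` (the value is illustrative;
the default is not stated in this diff).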
Also add a new QDISC_BURST_DROP drop reason to better diagnose
future issues.
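SKB drop reasons are declared through a macro list in
include/net/dropreason-core.h; a hedged sketch of the shape such an
addition takes (the reason name is from this patch, but its placement
and kdoc wording are assumptions):

```c
	/* In the DEFINE_DROP_REASON(FN, FNe) macro list: */
	FN(QDISC_DROP)			\
	FN(QDISC_BURST_DROP)		\

	/* ... and the matching enum entry with its kdoc: */
	/**
	 * @SKB_DROP_REASON_QDISC_BURST_DROP: dropped in __dev_xmit_skb()
	 * because the lockless defer list grew beyond
	 * net.core.qdisc_max_burst before the qdisc lock was taken.
	 */
	SKB_DROP_REASON_QDISC_BURST_DROP,
```

This lets tools watching the kfree_skb tracepoint tell burst drops
apart from ordinary qdisc drops.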
Thanks Neal!
Fixes: 100dfa74cad9 ("net: dev_queue_xmit() llist adoption")
Reported-and-bisected-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Link: https://patch.msgid.link/20260107104159.3669285-1-edumazet@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Diffstat (limited to 'net/core/dev.c')
| -rw-r--r-- | net/core/dev.c | 6 |
1 file changed, 3 insertions, 3 deletions
```diff
diff --git a/net/core/dev.c b/net/core/dev.c
index 9af9c3df452f..ccef685023c2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4203,8 +4203,8 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 	do {
 		if (first_n && !defer_count) {
 			defer_count = atomic_long_inc_return(&q->defer_count);
-			if (unlikely(defer_count > READ_ONCE(q->limit))) {
-				kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_DROP);
+			if (unlikely(defer_count > READ_ONCE(net_hotdata.qdisc_max_burst))) {
+				kfree_skb_reason(skb, SKB_DROP_REASON_QDISC_BURST_DROP);
 				return NET_XMIT_DROP;
 			}
 		}
@@ -4222,7 +4222,7 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 	ll_list = llist_del_all(&q->defer_list);
 	/* There is a small race because we clear defer_count not atomically
 	 * with the prior llist_del_all(). This means defer_list could grow
-	 * over q->limit.
+	 * over qdisc_max_burst.
 	 */
 	atomic_long_set(&q->defer_count, 0);
```
