| author | Eric Dumazet <edumazet@google.com> | 2026-04-07 14:30:53 +0000 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2026-04-08 19:18:52 -0700 |
| commit | ea25e03da7a79e0413f1606d4a407a97ed41628a | |
| tree | 74eed613fced494ff0348915a73972a8e2e1f06f /net/sched | |
| parent | dbc2bb4e8742068d3d3dc8ebb46d874e5fd953b8 | |
codel: annotate data-races in codel_dump_stats()
codel_dump_stats() runs with only RTNL held,
reading fields that can be changed concurrently from the qdisc fast path.
Add READ_ONCE()/WRITE_ONCE() annotations.
An alternative would be to acquire the qdisc spinlock, but our long-term
goal is to make qdisc dump operations as lockless as we can.
tc_codel_xstats fields don't need to be latched as one atomic snapshot;
if they did, this bug would have been caught earlier.
No change in kernel size:
```
$ scripts/bloat-o-meter -t vmlinux.0 vmlinux
add/remove: 0/0 grow/shrink: 1/1 up/down: 3/-1 (2)
Function                 old     new   delta
codel_qdisc_dequeue     2462    2465      +3
codel_dump_stats         250     249      -1
Total: Before=29739919, After=29739921, chg +0.00%
```
Fixes: 76e3cc126bb2 ("codel: Controlled Delay AQM")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://patch.msgid.link/20260407143053.1570620-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'net/sched')
| -rw-r--r-- | net/sched/sch_codel.c | 22 |
1 file changed, 11 insertions(+), 11 deletions(-)
```diff
diff --git a/net/sched/sch_codel.c b/net/sched/sch_codel.c
index dc2be90666ff..317aae0ec7bd 100644
--- a/net/sched/sch_codel.c
+++ b/net/sched/sch_codel.c
@@ -85,7 +85,7 @@ static int codel_qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		return qdisc_enqueue_tail(skb, sch);
 	}
 	q = qdisc_priv(sch);
-	q->drop_overlimit++;
+	WRITE_ONCE(q->drop_overlimit, q->drop_overlimit + 1);
 	return qdisc_drop_reason(skb, sch, to_free, QDISC_DROP_OVERLIMIT);
 }
@@ -221,18 +221,18 @@ static int codel_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
 {
 	const struct codel_sched_data *q = qdisc_priv(sch);
 	struct tc_codel_xstats st = {
-		.maxpacket	= q->stats.maxpacket,
-		.count		= q->vars.count,
-		.lastcount	= q->vars.lastcount,
-		.drop_overlimit	= q->drop_overlimit,
-		.ldelay		= codel_time_to_us(q->vars.ldelay),
-		.dropping	= q->vars.dropping,
-		.ecn_mark	= q->stats.ecn_mark,
-		.ce_mark	= q->stats.ce_mark,
+		.maxpacket	= READ_ONCE(q->stats.maxpacket),
+		.count		= READ_ONCE(q->vars.count),
+		.lastcount	= READ_ONCE(q->vars.lastcount),
+		.drop_overlimit	= READ_ONCE(q->drop_overlimit),
+		.ldelay		= codel_time_to_us(READ_ONCE(q->vars.ldelay)),
+		.dropping	= READ_ONCE(q->vars.dropping),
+		.ecn_mark	= READ_ONCE(q->stats.ecn_mark),
+		.ce_mark	= READ_ONCE(q->stats.ce_mark),
 	};
 
-	if (q->vars.dropping) {
-		codel_tdiff_t delta = q->vars.drop_next - codel_get_time();
+	if (st.dropping) {
+		codel_tdiff_t delta = READ_ONCE(q->vars.drop_next) - codel_get_time();
 
 		if (delta >= 0)
 			st.drop_next = codel_time_to_us(delta);
```
