path: root/include/uapi/linux
author	Dmitry Skorodumov <dskr99@gmail.com>	2026-01-12 17:24:06 +0300
committer	Jakub Kicinski <kuba@kernel.org>	2026-01-19 10:03:30 -0800
commit	d3ba32162488283c0a4c5bedd8817aec91748802 (patch)
tree	e7da04ae7d29d2a8c3be93dff91086a36c55f802 /include/uapi/linux
parent	7a29f6bf60f2590fe5e9c4decb451e19afad2bcf (diff)
ipvlan: Make the addrs_lock be per port
Make the addrs_lock be per port, not per ipvlan dev. The initial code seems to have been written on the assumption that any address change occurs under RTNL, but that is not the case for IPv6. So:

1) Introduce a per-port addrs_lock.
2) Fix the places where taking the lock had been forgotten (ipvlan_open/ipvlan_close).

This appears to be a very minor problem, though, since it is highly unlikely that ipvlan_add_addr() will be called on two CPUs simultaneously. Nevertheless, it could cause:

1) A false negative from ipvlan_addr_busy(): one interface iterates through all of port->ipvlans and each ipvlan->addrs list under one ipvlan's spinlock while another device adds an IP under its own lock. This is only possible for IPv6, since it looks like only ipvlan_addr6_event() can be called without rtnl_lock.
2) A race, since ipvlan_ht_addr_add(port) is called under different ipvlan->addrs_lock locks.

This should not affect performance, since adding/removing an IP is a rare operation and the spinlock is not taken on fast paths.

Fixes: 8230819494b3 ("ipvlan: use per device spinlock to protect addrs list updates")
Signed-off-by: Dmitry Skorodumov <skorodumov.dmitry@huawei.com>
Reviewed-by: Paolo Abeni <pabeni@redhat.com>
Link: https://patch.msgid.link/20260112142417.4039566-2-skorodumov.dmitry@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
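
To make the change concrete, below is a minimal, self-contained C sketch, not the actual patch: the struct layouts are trimmed down and the ipvlan_addr_busy() check is folded into ipvlan_add_addr() purely for illustration (in the real driver the busy check is done by the callers). It shows the lock moving from the ipvlan device to the port, which is what serializes a lockless-RTNL IPv6 address add against a concurrent walk of port->ipvlans.

/*
 * Illustrative sketch only: simplified structs and a condensed add path,
 * showing the per-port lock that replaces the per-device one.
 */
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct ipvl_port {
	struct list_head ipvlans;     /* every ipvlan dev attached to this port */
	spinlock_t       addrs_lock;  /* per-port lock for all address updates  */
};

struct ipvl_dev {
	struct ipvl_port *port;
	struct list_head  addrs;      /* per-device address list (unchanged)    */
	/* spinlock_t addrs_lock;        removed: was per device before the fix */
};

struct ipvl_addr {
	struct ipvl_dev  *master;
	struct list_head  anode;
};

/* Provided by the driver; walks port->ipvlans and each device's addrs list. */
bool ipvlan_addr_busy(struct ipvl_port *port, void *iaddr, bool is_v6);

static int ipvlan_add_addr(struct ipvl_dev *ipvlan, void *iaddr, bool is_v6)
{
	struct ipvl_port *port = ipvlan->port;
	struct ipvl_addr *addr;
	int ret = 0;

	addr = kzalloc(sizeof(*addr), GFP_ATOMIC);
	if (!addr)
		return -ENOMEM;
	addr->master = ipvlan;

	/*
	 * Before the fix this took &ipvlan->addrs_lock, so two devices on the
	 * same port could add addresses under different locks (possible on the
	 * IPv6 path, which runs without RTNL) while ipvlan_addr_busy() walked
	 * all of them. With the per-port lock, the busy check, the list add,
	 * and the hash insert are all serialized against each other.
	 */
	spin_lock_bh(&port->addrs_lock);
	if (ipvlan_addr_busy(port, iaddr, is_v6)) {
		ret = -EADDRINUSE;
		kfree(addr);
	} else {
		list_add_tail_rcu(&addr->anode, &ipvlan->addrs);
		/* the port hash-table insert (ipvlan_ht_addr_add) also
		 * happens under this same port-wide lock */
	}
	spin_unlock_bh(&port->addrs_lock);
	return ret;
}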
Diffstat (limited to 'include/uapi/linux')
0 files changed, 0 insertions, 0 deletions