| author | Joe Lawrence <Joe.Lawrence@stratus.com> | 2014-10-03 09:58:34 -0400 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2014-10-15 08:36:42 +0200 |
| commit | 5c4b226c22294a1746f386091da274016cbce394 (patch) | |
| tree | e113f6f297694b0901511042bb02be4f168ec41a | |
| parent | 79152e44ed24adc9eaf35469b7ef706368b4c2ca (diff) | |
team: avoid race condition in scheduling delayed work
[ Upstream commit 47549650abd13d873fd2e5fc218db19e21031074 ]
When team_notify_peers and team_mcast_rejoin are called, they both reset
their respective .count_pending atomic variable. Then when the actual
worker function is executed, the variable is atomically decremented.
This pattern introduces a potential race condition where the
.count_pending rolls over and the worker function keeps rescheduling
until .count_pending decrements to zero again:
```
THREAD 1                           THREAD 2
========                           ========
team_notify_peers(teamX)
  atomic_set count_pending = 1
  schedule_delayed_work
                                   team_notify_peers(teamX)
                                     atomic_set count_pending = 1
team_notify_peers_work
  atomic_dec_and_test
    count_pending = 0
  (return)
                                     schedule_delayed_work
                                   team_notify_peers_work
                                     atomic_dec_and_test
                                       count_pending = -1
                                     schedule_delayed_work
                                   (repeat until count_pending = 0)
```
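To make the interleaving concrete, here is a minimal user-space sketch — an illustration only, not the driver code: C11 `<stdatomic.h>` stands in for the kernel's `atomic_t` API, the function names are hypothetical, and the two threads are replayed as a fixed sequence of steps — showing how overwriting the counter lets it underflow:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int count_pending;

/* Models team_notify_peers() before the fix: overwrite the counter. */
static void notify_peers_set(void)
{
        atomic_store(&count_pending, 1);        /* like atomic_set() */
}

/* Models team_notify_peers_work(): decrement, reschedule while nonzero. */
static bool notify_peers_work(void)
{
        /* atomic_fetch_sub() returns the old value; old - 1 is the new one. */
        return atomic_fetch_sub(&count_pending, 1) - 1 != 0; /* true => reschedule */
}

int main(void)
{
        /* Replay the interleaving from the diagram, step by step. */
        notify_peers_set();                     /* THREAD 1: count_pending = 1   */
        notify_peers_set();                     /* THREAD 2: count_pending = 1   */
        notify_peers_work();                    /* THREAD 1's work: 1 -> 0, stop */
        bool resched = notify_peers_work();     /* THREAD 2's work: 0 -> -1 (!)  */

        printf("count_pending = %d, reschedule = %s\n",
               atomic_load(&count_pending), resched ? "yes" : "no");
        /* Prints "count_pending = -1, reschedule = yes": the work keeps
         * rescheduling itself until the counter wraps back around to zero. */
        return 0;
}
```

Because the second caller overwrote the counter instead of adding to it, the second work run decrements a value that is already zero and keeps asking to be rescheduled.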
Instead of assigning a new value to .count_pending, use atomic_add to tack on the additional desired worker function invocations.
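As a rough before/after comparison under the same assumptions (user-space C11 atomics and a sequential replay of the interleaving, not the driver itself), swapping the store for an add keeps the counter balanced:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int count_pending;

/* Models team_notify_peers() after the fix: accumulate instead of overwrite. */
static void notify_peers_add(void)
{
        atomic_fetch_add(&count_pending, 1);    /* like atomic_add() */
}

/* Same worker model as before: decrement, reschedule while nonzero. */
static bool notify_peers_work(void)
{
        return atomic_fetch_sub(&count_pending, 1) - 1 != 0; /* true => reschedule */
}

int main(void)
{
        notify_peers_add();                     /* THREAD 1: count_pending = 1 */
        notify_peers_add();                     /* THREAD 2: count_pending = 2 */
        notify_peers_work();                    /* 2 -> 1, run once more       */
        bool resched = notify_peers_work();     /* 1 -> 0, stop                */

        printf("count_pending = %d, reschedule = %s\n",
               atomic_load(&count_pending), resched ? "yes" : "no");
        /* Prints "count_pending = 0, reschedule = no": both requests are
         * honored and the counter never goes negative. */
        return 0;
}
```

Each caller now contributes exactly its own count, so concurrent callers can only make the work run more times; they can never drive the counter negative.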
Signed-off-by: Joe Lawrence <joe.lawrence@stratus.com>
Acked-by: Jiri Pirko <jiri@resnulli.us>
Fixes: fc423ff00df3a19554414ee ("team: add peer notification")
Fixes: 492b200efdd20b8fcfdac87 ("team: add support for sending multicast rejoins")
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-rw-r--r-- | drivers/net/team/team.c | 4 |
1 file changed, 2 insertions(+), 2 deletions(-)
```diff
diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
index 26d8c29b59de..979fe433278c 100644
--- a/drivers/net/team/team.c
+++ b/drivers/net/team/team.c
@@ -647,7 +647,7 @@ static void team_notify_peers(struct team *team)
 {
 	if (!team->notify_peers.count || !netif_running(team->dev))
 		return;
-	atomic_set(&team->notify_peers.count_pending, team->notify_peers.count);
+	atomic_add(team->notify_peers.count, &team->notify_peers.count_pending);
 	schedule_delayed_work(&team->notify_peers.dw, 0);
 }
 
@@ -687,7 +687,7 @@ static void team_mcast_rejoin(struct team *team)
 {
 	if (!team->mcast_rejoin.count || !netif_running(team->dev))
 		return;
-	atomic_set(&team->mcast_rejoin.count_pending, team->mcast_rejoin.count);
+	atomic_add(team->mcast_rejoin.count, &team->mcast_rejoin.count_pending);
 	schedule_delayed_work(&team->mcast_rejoin.dw, 0);
 }
```