author | Steven Whitehouse <swhiteho@redhat.com> | 2010-01-29 15:21:27 +0000
---|---|---
committer | Steven Whitehouse <swhiteho@redhat.com> | 2010-02-03 09:56:21 +0000
commit | 8f05228ee7c8f409ae3c6f9c3e13d7ccb9c18360 (patch) |
tree | 34e8cf87485edf4ecb6878ade96704975e5d5bf5 /fs/gfs2/glock.c |
parent | e402746a945ceb9d0486a8e3d5917c9228fa4404 (diff) |
GFS2: Extend umount wait coverage to full glock lifetime
Although all glocks are, by the time of the umount glock wait,
scheduled for demotion, some of them have not made it far enough
through the demotion process for the original waiting code to
catch them.

This patch extends the ref count to cover the whole glock lifetime,
so that the umount wait is guaranteed to catch every glock. That
makes the change a bit more invasive, but it seems the only sensible
solution at the moment.
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
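In other words, each glock contributes to a "disposal" count for its entire lifetime: gfs2_glock_get() bumps sdp->sd_glock_disposal when the glock is created, the final free drops it, and the umount path sleeps on sdp->sd_glock_wait until the count reaches zero. Below is a minimal userspace sketch of that pattern, using a pthread mutex and condition variable in place of the kernel's atomic counter and wait queue; all names here are illustrative, not the kernel's.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  zero = PTHREAD_COND_INITIALIZER;
static int disposal;                     /* objects created but not yet freed */

static void obj_alloc(void)              /* analogous to gfs2_glock_get() */
{
	pthread_mutex_lock(&lock);
	disposal++;
	pthread_mutex_unlock(&lock);
}

static void obj_free(void)               /* analogous to the final glock free */
{
	pthread_mutex_lock(&lock);
	if (--disposal == 0)
		pthread_cond_broadcast(&zero);   /* like wake_up(&sd_glock_wait) */
	pthread_mutex_unlock(&lock);
}

static void teardown_wait(void)          /* analogous to the umount wait */
{
	pthread_mutex_lock(&lock);
	while (disposal != 0)                /* like wait_event(..., count == 0) */
		pthread_cond_wait(&zero, &lock);
	pthread_mutex_unlock(&lock);
	printf("all objects disposed of, safe to finish teardown\n");
}

int main(void)
{
	obj_alloc();
	obj_free();
	teardown_wait();
	return 0;
}
```

Compile with `cc -pthread`. The point is only that the counter spans the object's whole lifetime, so the teardown wait cannot miss an object that has been created but not yet fully freed.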
Diffstat (limited to 'fs/gfs2/glock.c')
-rw-r--r-- | fs/gfs2/glock.c | 4
1 file changed, 4 insertions, 0 deletions
```diff
diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index f455a03a09e2..f42663325931 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -769,6 +769,7 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number,
 	if (!gl)
 		return -ENOMEM;
 
+	atomic_inc(&sdp->sd_glock_disposal);
 	gl->gl_flags = 0;
 	gl->gl_name = name;
 	atomic_set(&gl->gl_ref, 1);
@@ -1538,6 +1539,9 @@ void gfs2_gl_hash_clear(struct gfs2_sbd *sdp)
 		up_write(&gfs2_umount_flush_sem);
 		msleep(10);
 	}
+	flush_workqueue(glock_workqueue);
+	wait_event(sdp->sd_glock_wait, atomic_read(&sdp->sd_glock_disposal) == 0);
+	gfs2_dump_lockstate(sdp);
 }
 
 void gfs2_glock_finish_truncate(struct gfs2_inode *ip)
```
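These hunks only add the increment (one per glock created in gfs2_glock_get()) and the umount-side wait: flush_workqueue(glock_workqueue) first lets any queued glock work run, then wait_event() sleeps until every glock has been disposed of. The matching decrement lives on the final-free path and is not part of this diff; a hedged, illustrative sketch of what that counterpart looks like (the helper name glock_disposal_put is hypothetical) would be:

```c
/*
 * Illustrative counterpart, not part of this diff: the path that frees
 * the last reference to a glock drops sd_glock_disposal and wakes the
 * umount waiter once the count reaches zero.
 */
static void glock_disposal_put(struct gfs2_sbd *sdp)
{
	if (atomic_dec_and_test(&sdp->sd_glock_disposal))
		wake_up(&sdp->sd_glock_wait);
}
```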