From c3921ab71507b108d51a0f1ee960f80cd668a93d Mon Sep 17 00:00:00 2001
From: Linus Torvalds
Date: Sun, 11 May 2008 16:04:48 -0700
Subject: Add new 'cond_resched_bkl()' helper function

It acts exactly like a regular 'cond_resched()', but will not get
optimized away when CONFIG_PREEMPT is set.

Normal kernel code is already preemptible in the presence of
CONFIG_PREEMPT, so cond_resched() is optimized away (see commit
02b67cc3ba36bdba351d6c3a00593f4ec550d9d3 "sched: do not do
cond_resched() when CONFIG_PREEMPT").

But when wanting to conditionally reschedule while holding a lock, you
need to use "cond_resched_lock(lock)", and the new function is the BKL
equivalent of that.

Also make fs/locks.c use it.

Signed-off-by: Linus Torvalds
---
 fs/locks.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'fs/locks.c')

diff --git a/fs/locks.c b/fs/locks.c
index 0ac6b92cb0b6..11dbf08651b7 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -773,7 +773,7 @@ static int flock_lock_file(struct file *filp, struct file_lock *request)
 	 * give it the opportunity to lock the file.
 	 */
 	if (found)
-		cond_resched();
+		cond_resched_bkl();
 
 find_conflict:
 	for_each_lock(inode, before) {
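
To make the distinction in the commit message concrete, below is a minimal
userspace C sketch of the idea, not the kernel implementation. It assumes
what the message describes: under CONFIG_PREEMPT the generic cond_resched()
may compile to a no-op, while code holding the BKL still needs a helper that
always reaches the real reschedule check. All fake_* and *_sketch names are
invented for this illustration and are not kernel APIs.

/*
 * Illustrative userspace sketch -- not kernel code.  Only the
 * CONFIG_PREEMPT / cond_resched() / cond_resched_bkl() naming comes
 * from the patch above; everything else is made up for the example.
 */
#include <stdio.h>

/* Pretend the scheduler periodically asks us to yield. */
static int fake_need_resched(void)
{
	static int calls;

	return (++calls % 3) == 0;
}

static void fake_schedule(void)
{
	printf("rescheduled\n");
}

/* Always-built slow path, analogous in spirit to _cond_resched(). */
static int do_cond_resched_sketch(void)
{
	if (fake_need_resched()) {
		fake_schedule();
		return 1;
	}
	return 0;
}

#ifdef CONFIG_PREEMPT
/*
 * With full preemption, ordinary code can be preempted anywhere, so the
 * generic helper is allowed to collapse to nothing...
 */
static inline int cond_resched_sketch(void) { return 0; }
#else
#define cond_resched_sketch() do_cond_resched_sketch()
#endif

/*
 * ...but code holding the BKL is not preempted even with CONFIG_PREEMPT,
 * so its helper must always reach the real check.
 */
#define cond_resched_bkl_sketch() do_cond_resched_sketch()

int main(void)
{
	int i;

	for (i = 0; i < 6; i++) {
		cond_resched_sketch();     /* no-op when CONFIG_PREEMPT is defined */
		cond_resched_bkl_sketch(); /* never optimized away */
	}
	return 0;
}

Building this with and without -DCONFIG_PREEMPT shows the asymmetry: in the
"preemptible" build the generic helper disappears, while the BKL variant
still calls into the slow path, which mirrors why flock_lock_file() switches
from cond_resched() to cond_resched_bkl() in the hunk above.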