author	Shaohua Li <shaohua.li@intel.com>	2010-08-20 16:49:43 +0800
committer	Jan Kara <jack@suse.cz>	2010-09-09 16:08:51 +0200
commit	d530148ae8bffe1b33f50d1776d185a6e85dc774 (patch)
tree	122385d0374529a040cef899233ee5ac7d43d726 /include
parent	d56557af19867edb8c0e96f8e26399698a08857f (diff)
download	linux-3.10-d530148ae8bffe1b33f50d1776d185a6e85dc774.tar.gz
	linux-3.10-d530148ae8bffe1b33f50d1776d185a6e85dc774.tar.bz2
	linux-3.10-d530148ae8bffe1b33f50d1776d185a6e85dc774.zip
dquot: do full inode dirty in allocating space
Alex Shi found a regression when running the ffsb test. The test has several threads; each thread creates a small file, writes to it, and then deletes it. ffsb reports about a 20% regression, and Alex bisected it to commit 43d2932d88e4. The test calls __mark_inode_dirty 3 times. Without that commit we take inode_lock only once, while with it we take the lock 3 times, with flags I_DIRTY_SYNC, I_DIRTY_PAGES, and I_DIRTY. Perf shows the lock contention increased too much.

The patch below fixes it. The filesystem is allocating blocks, which usually means file writes, so the inode will become dirty soon anyway. We therefore fully dirty the inode up front to reduce inode_lock contention across the several calls of __mark_inode_dirty.

Jan Kara: Added comment.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
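For context, a minimal sketch of the fs.h helpers involved around this kernel version (paraphrased for illustration; the exact definitions are an assumption and are not part of this patch). mark_inode_dirty() requests all dirty bits at once, so after the first call every later __mark_inode_dirty() finds the bits already set and returns before touching inode_lock:

#define I_DIRTY_SYNC		1	/* inode itself needs writeout */
#define I_DIRTY_DATASYNC	2	/* data-related inode fields are dirty */
#define I_DIRTY_PAGES		4	/* inode has dirty pages */
#define I_DIRTY	(I_DIRTY_SYNC | I_DIRTY_DATASYNC | I_DIRTY_PAGES)

static inline void mark_inode_dirty(struct inode *inode)
{
	__mark_inode_dirty(inode, I_DIRTY);		/* all dirty bits in one go */
}

static inline void mark_inode_dirty_sync(struct inode *inode)
{
	__mark_inode_dirty(inode, I_DIRTY_SYNC);	/* only the sync bit */
}

void __mark_inode_dirty(struct inode *inode, int flags)
{
	/* fast path (sketched): every requested bit is already set, so
	 * return without taking inode_lock at all */
	if ((inode->i_state & flags) == flags)
		return;
	/* slow path: take inode_lock, update i_state, queue for writeback */
}

This is why dirtying the inode fully in dquot_alloc_space() turns the later __mark_inode_dirty() calls into lock-free no-ops.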
Diffstat (limited to 'include')
-rw-r--r--	include/linux/quotaops.h	10
1 file changed, 8 insertions, 2 deletions
diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h
index d50ba858cfe..d1a9193960f 100644
--- a/include/linux/quotaops.h
+++ b/include/linux/quotaops.h
@@ -274,8 +274,14 @@ static inline int dquot_alloc_space(struct inode *inode, qsize_t nr)
 	int ret;
 
 	ret = dquot_alloc_space_nodirty(inode, nr);
-	if (!ret)
-		mark_inode_dirty_sync(inode);
+	if (!ret) {
+		/*
+		 * Mark inode fully dirty. Since we are allocating blocks, inode
+		 * would become fully dirty soon anyway and it reportedly
+		 * reduces inode_lock contention.
+		 */
+		mark_inode_dirty(inode);
+	}
 	return ret;
 }
 
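For context, filesystems typically reach this helper on their block allocation path through the neighbouring wrapper in the same header (sketched from quotaops.h of this era; not part of this diff), so the full inode dirty now happens once per quota charge:

static inline int dquot_alloc_block(struct inode *inode, qsize_t nr)
{
	/* charge nr blocks against the quota and dirty the inode */
	return dquot_alloc_space(inode, nr << inode->i_blkbits);
}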