author		Li Zefan <lizf@cn.fujitsu.com>	2011-10-10 15:43:34 -0400
committer	Chris Mason <chris.mason@oracle.com>	2011-10-10 15:43:34 -0400
commit		2a0f7f5769992bae5b3f97157fd80b2b943be485 (patch)
tree		ee19a5470211f13d1e53a311cb0d9e2ccc2988fc /fs/btrfs
parent		b6316429af7f365f307dfd2b6a7a42f2563aef19 (diff)
Btrfs: fix recursive auto-defrag
Follow these steps:
# mount -o autodefrag /dev/sda7 /mnt
# dd if=/dev/urandom of=/mnt/tmp bs=200K count=1
# sync
# dd if=/dev/urandom of=/mnt/tmp bs=8K count=1 conv=notrunc
and then it'll go into a loop: writeback -> defrag -> writeback ...
It's because writeback writes out [8K, 200K] first and only then writes
[0, 8K], which leaves the head of the file in a new out-of-place extent,
so the file looks fragmented again and autodefrag is re-triggered.
I tried to make writeback aware of whether the pages were dirtied by
defrag, but the patch was a bit intrusive. Here I simply set
writeback_index when we defrag a file, so the defrag range gets written
back sequentially.
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
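
For context: when writeback runs in range_cyclic mode (as the background
flusher does), write_cache_pages() in mm/page-writeback.c resumes from
mapping->writeback_index rather than from offset 0, and stores its
stopping point back there afterwards. That is the behaviour this fix
leans on: pulling writeback_index back to the first page of the defrag
range makes the next pass write the range in ascending order. The
snippet below is only a simplified illustration of that start-page
selection under the kernel APIs of this era, not the actual mm code.

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/writeback.h>

/*
 * Simplified illustration of how cyclic writeback chooses where to
 * start; the real logic lives in write_cache_pages() and is more
 * involved (wrap-around handling, page tagging, done_index updates).
 */
static pgoff_t writeback_start_index(struct address_space *mapping,
				     struct writeback_control *wbc)
{
	if (wbc->range_cyclic)
		/* resume where the previous pass left off */
		return mapping->writeback_index;

	/* the caller asked for an explicit byte range instead */
	return wbc->range_start >> PAGE_CACHE_SHIFT;
}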
Diffstat (limited to 'fs/btrfs')
-rw-r--r--	fs/btrfs/ioctl.c	7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 6f89bcc4e55..df40b7c5f06 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1055,6 +1055,13 @@ int btrfs_defrag_file(struct inode *inode, struct file *file,
 	if (!max_to_defrag)
 		max_to_defrag = last_index - 1;
 
+	/*
+	 * make writeback starts from i, so the defrag range can be
+	 * written sequentially.
+	 */
+	if (i < inode->i_mapping->writeback_index)
+		inode->i_mapping->writeback_index = i;
+
 	while (i <= last_index && defrag_count < max_to_defrag) {
 		/*
 		 * make sure we stop running if someone unmounts
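
A note on the change itself: the added check only ever moves
writeback_index backwards, and only when it already points past i, the
first page of the defrag range, so a following flusher pass starts at
(or before) the range instead of wrapping around it. As a worked
illustration, assuming 4K pages, the 200K test file covers pages 0-49
and the 8K overwrite dirties pages 0-1; with writeback_index already
sitting past the 8K mark, as in the commit's description, cyclic
writeback would emit pages 2-49 first and then wrap to pages 0-1,
leaving the head of the file in a separate out-of-place extent and
re-triggering autodefrag. With the index pulled back to page 0, the
whole range goes out in order.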