author     Wu Fengguang <wfg@mail.ustc.edu.cn>      2006-06-25 05:48:43 -0700
committer  Linus Torvalds <torvalds@g5.osdl.org>    2006-06-25 10:01:17 -0700
commit     76d42bd96984832c4ea8bc8cbd74e496ac31409e (patch)
tree       138fb5c39d671166485cf2e16e450332daeb7081 /mm/filemap.c
parent     78dbe706e22f54bce61571ad837238382e1ba5f9 (diff)
[PATCH] readahead: backoff on I/O error
Back off the readahead size exponentially on I/O error. Michael Tokarev <mjt@tls.msk.ru> described the problem as:

[QUOTE]
Suppose there's a CD-ROM with a scratch or similar defect, and one sector is unreadable. In order to "fix" it, one has to read it and write it to another CD-ROM, or something... or just ignore the error (if it's just a skip in a video stream). Let's assume the unreadable block is number U.

But the current behavior is just insane. An application requests block number N, which is before U. The kernel tries to read ahead blocks N..U. The CD-ROM drive tries to read it, and re-read it... for some time. Finally, when all the N..U-1 blocks are read, the kernel returns block number N (as requested) to the application, successfully.

Now the app requests block number N+1, and the kernel tries to read blocks N+1..U+1, retrying again as in the previous step. And so on, up to when the app requests block number U-1. And when, finally, it requests block U, it receives a read error.

So the kernel currently tries to re-read the same failing block as many times as the current readahead value (256 (times?) by default). This whole process already killed my CD-ROM drive (I posted about it to LKML several months ago) - literally, the drive has fried and does not work anymore. Of course that problem was a bug in the firmware (or whatever) of the drive *too*, but... the main problem remains the current readahead logic as described above.
[/QUOTE]

Which was confirmed by Jens Axboe <axboe@suse.de>:

[QUOTE]
For ide-cd, it tends to only end the first part of the request on a medium error. So you may see a lot of repeats :/
[/QUOTE]

With this patch, retries are expected to be reduced from, say, 256, to 5.

[akpm@osdl.org: cleanups]
Signed-off-by: Wu Fengguang <wfg@mail.ustc.edu.cn>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
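The "from 256 to 5" estimate follows directly from the quarter-on-error rule the patch applies below. Here is a minimal stand-alone sketch of that arithmetic, outside the kernel; the variable and helper names, and the starting window of 256 pages, are illustrative assumptions, not kernel code:

	#include <stdio.h>

	/* Illustrative stand-in for struct file_ra_state's ra_pages field. */
	static unsigned long ra_pages = 256;

	/* Quarter the readahead window on each I/O error, as the patch does. */
	static void shrink_on_eio(void)
	{
		if (!ra_pages)
			return;
		ra_pages /= 4;
	}

	int main(void)
	{
		int retries = 0;

		/* Each failed readahead pass shrinks the window until it hits 0. */
		while (ra_pages) {
			shrink_on_eio();
			retries++;
			printf("retry %d: window now %lu pages\n",
			       retries, ra_pages);
		}
		/* 256 -> 64 -> 16 -> 4 -> 1 -> 0: five passes, matching the
		 * changelog's estimate. */
		return 0;
	}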
Diffstat (limited to 'mm/filemap.c')
-rw-r--r--  mm/filemap.c | 28
1 file changed, 28 insertions(+), 0 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 1ed4be2a765..9c7334bafda 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -828,6 +828,32 @@ grab_cache_page_nowait(struct address_space *mapping, unsigned long index)
}
EXPORT_SYMBOL(grab_cache_page_nowait);
+/*
+ * CD/DVDs are error-prone. When a medium error occurs, the driver may fail
+ * a _large_ part of the I/O request. Imagine the worst-case scenario:
+ *
+ * ---R__________________________________________B__________
+ * ^ reading here ^ bad block (assume 4k)
+ *
+ * read(R) => miss => readahead(R...B) => medium error => frustrating retries
+ * => failing the whole request => read(R) => read(R+1) =>
+ * readahead(R+1...B+1) => bang => read(R+2) => read(R+3) =>
+ * readahead(R+3...B+2) => bang => read(R+3) => read(R+4) =>
+ * readahead(R+4...B+3) => bang => read(R+4) => read(R+5) => ......
+ *
+ * It is going insane. Fix it by quickly scaling down the readahead size.
+ */
+static void shrink_readahead_size_eio(struct file *filp,
+ struct file_ra_state *ra)
+{
+ if (!ra->ra_pages)
+ return;
+
+ ra->ra_pages /= 4;
+ printk(KERN_WARNING "Reducing readahead size to %luK\n",
+ ra->ra_pages << (PAGE_CACHE_SHIFT - 10));
+}
+
/**
* do_generic_mapping_read - generic file read routine
* @mapping: address_space to be read
@@ -985,6 +1011,7 @@ readpage:
}
unlock_page(page);
error = -EIO;
+ shrink_readahead_size_eio(filp, &ra);
goto readpage_error;
}
unlock_page(page);
@@ -1522,6 +1549,7 @@ page_not_uptodate:
* Things didn't work out. Return zero to tell the
* mm layer so, possibly freeing the page cache page first.
*/
+ shrink_readahead_size_eio(file, ra);
page_cache_release(page);
return NULL;
}
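A note on the printk's unit conversion: a page is 2^PAGE_CACHE_SHIFT bytes and 1K is 2^10 bytes, so shifting the page count left by PAGE_CACHE_SHIFT - 10 converts pages to kilobytes. A minimal stand-alone check of that expression, assuming 4 KiB pages (the PAGE_CACHE_SHIFT constant is redefined here only for the demonstration):

	#include <stdio.h>

	#define PAGE_CACHE_SHIFT 12	/* assumes 4 KiB pages */

	int main(void)
	{
		unsigned long ra_pages = 64;	/* e.g. 256 pages after one /= 4 */

		/* pages -> KiB: each page is 2^PAGE_CACHE_SHIFT bytes, and
		 * 1 KiB is 2^10 bytes, so shift by the difference. */
		printf("Reducing readahead size to %luK\n",
		       ra_pages << (PAGE_CACHE_SHIFT - 10));	/* prints 256K */
		return 0;
	}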