| author | NeilBrown <neilb@suse.de> | 2013-07-16 16:50:47 +1000 |
|---|---|---|
| committer | NeilBrown <neilb@suse.de> | 2013-07-18 14:18:01 +1000 |
| commit | 7bb23c4934059c64cbee2e41d5d24ce122285176 | |
| tree | e457c9dc9bfbc502f8a739749a6d6d287f2ec144 /drivers/md | |
| parent | 47188d39b5deeebf41f87a02af1b3935866364cf | |
md/raid10: fix two problems with RAID10 resync.
1/ When a difference between blocks is found, data is copied from
one bio to the other. However, bv_len is used as the length to
copy, and this could be zero. So use r10_bio->sectors to calculate
the length instead.
Using bv_len was probably always a bit dubious, but the introduction
of bio_advance made it much more likely to be a problem.
(A simplified userspace sketch of this length calculation follows the diff below.)
2/ When preparing some blocks for sync, we don't set BIO_UPTODATE
except on bios that we schedule for a read. This ensures that
missing/failed devices don't confuse the loop at the top of
sync_request_write().
Commit 8be185f2c9d54d6 "raid10: Use bio_reset()"
removed a loop which set BIO_UPTODATE on all appropriate bios,
so we need to re-add that flag.
(A sketch of how this flag guides the reference-copy selection also follows the diff.)
These bugs were introduced in 3.10, so this patch is suitable for
3.10-stable and removes a potential source of data corruption.
Cc: stable@vger.kernel.org (3.10)
Reported-by: Brassow Jonathan <jbrassow@redhat.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Diffstat (limited to 'drivers/md')

| -rw-r--r-- | drivers/md/raid10.c | 11 |
|---|---|---|

1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index cd066b63bdaf..957a719e8c2f 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2097,11 +2097,17 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
 			 * both 'first' and 'i', so we just compare them.
 			 * All vec entries are PAGE_SIZE;
 			 */
-			for (j = 0; j < vcnt; j++)
+			int sectors = r10_bio->sectors;
+			for (j = 0; j < vcnt; j++) {
+				int len = PAGE_SIZE;
+				if (sectors < (len / 512))
+					len = sectors * 512;
 				if (memcmp(page_address(fbio->bi_io_vec[j].bv_page),
 					   page_address(tbio->bi_io_vec[j].bv_page),
-					   fbio->bi_io_vec[j].bv_len))
+					   len))
 					break;
+				sectors -= len/512;
+			}
 			if (j == vcnt)
 				continue;
 			atomic64_add(r10_bio->sectors, &mddev->resync_mismatches);
@@ -3407,6 +3413,7 @@ static sector_t sync_request(struct mddev *mddev, sector_t sector_nr,
 
 		if (bio->bi_end_io == end_sync_read) {
 			md_sync_acct(bio->bi_bdev, nr_sectors);
+			set_bit(BIO_UPTODATE, &bio->bi_flags);
 			generic_make_request(bio);
 		}
 	}
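The first fix replaces a possibly-zero bv_len with a length derived from r10_bio->sectors. Below is a minimal userspace sketch of that calculation, not kernel code: the PAGE_SIZE / 512-byte arithmetic and the loop shape mirror the first hunk above, while the vcnt and sectors values are invented example inputs.

```c
/* Userspace model of the per-page compare length computed in the
 * patched sync_request_write() loop.  Only the arithmetic is taken
 * from the patch; vcnt and the starting sector count are example
 * values chosen for illustration.
 */
#include <stdio.h>

#define PAGE_SIZE   4096
#define SECTOR_SIZE 512

int main(void)
{
	int vcnt = 4;      /* pages per bio in this example */
	int sectors = 10;  /* pretend r10_bio->sectors == 10 (5120 bytes) */
	int j;

	for (j = 0; j < vcnt; j++) {
		int len = PAGE_SIZE;

		if (sectors < (len / SECTOR_SIZE))
			len = sectors * SECTOR_SIZE;  /* partial or empty tail page */
		printf("page %d: compare %d bytes\n", j, len);
		sectors -= len / SECTOR_SIZE;
	}
	return 0;
}
```

With these inputs the loop compares 4096 bytes of page 0, 1024 bytes of page 1 and 0 bytes of the remaining pages. As the commit message notes, bv_len could have been advanced to zero, in which case the old code would compare nothing for a page that still held real data.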
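The second fix re-adds BIO_UPTODATE on bios that actually have a read scheduled, so that sync_request_write() can pick a valid reference copy. The following is a hedged userspace sketch of that selection logic; struct fake_bio, COPIES and the field names are simplified stand-ins invented for illustration, not the real md structures.

```c
/* Userspace sketch of why BIO_UPTODATE matters: the loop at the top of
 * sync_request_write() picks the first up-to-date copy as the reference.
 * If the flag is never set, no reference copy is found; setting it only
 * on bios that had a read scheduled lets copies on missing/failed
 * devices be skipped.  All types and values below are stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

#define COPIES 3

struct fake_bio {
	bool read_scheduled;  /* a resync read was actually submitted */
	bool uptodate;        /* models the BIO_UPTODATE bit */
};

int main(void)
{
	/* Copy 0 lives on a missing/failed device: no read is scheduled. */
	struct fake_bio devs[COPIES] = {
		{ .read_scheduled = false },
		{ .read_scheduled = true  },
		{ .read_scheduled = true  },
	};
	int i, first = -1;

	/* The step the patch restores: flag only bios we submit reads for. */
	for (i = 0; i < COPIES; i++)
		if (devs[i].read_scheduled)
			devs[i].uptodate = true;

	/* Simplified "find the reference copy" loop. */
	for (i = 0; i < COPIES; i++)
		if (devs[i].uptodate) {
			first = i;
			break;
		}

	printf("reference copy: %d (copy 0 skipped: device missing)\n", first);
	return 0;
}
```

Running this prints "reference copy: 1", illustrating how the flag keeps a bio from a missing/failed device from being treated as a usable source.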