author    | Denis V. Lunev <den@openvz.org>       | 2015-05-12 17:30:56 +0300
committer | Stefan Hajnoczi <stefanha@redhat.com> | 2015-05-22 09:37:33 +0100
commit    | 459b4e66129d091a11e9886ecc15a8bf9f7f3d92
tree      | ff1fbde6cf0d62f8bf2518dd0bbee5507487726e /block/io.c
parent    | 4196d2f0308cb1ae13ed450424ab7dfe154acda9
block: align bounce buffers to page
The following sequence
    int fd = open(argv[1], O_RDWR | O_CREAT | O_DIRECT, 0644);
    for (i = 0; i < 100000; i++)
        write(fd, buf, 4096);
performs 5% better if buf is aligned to 4096 bytes.
The difference is quite reliable.
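For reference, the test above might be fleshed out into a standalone microbenchmark roughly like this (a sketch only: the posix_memalign() alignment, error handling and buffer contents are illustrative and were not part of the original test program):

    #define _GNU_SOURCE         /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDWR | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* O_DIRECT needs an aligned buffer; compare 512 vs 4096 here. */
        void *buf;
        if (posix_memalign(&buf, 4096, 4096) != 0) {
            return 1;
        }
        memset(buf, 0, 4096);

        for (int i = 0; i < 100000; i++) {
            if (write(fd, buf, 4096) != 4096) {
                perror("write");
                break;
            }
        }

        free(buf);
        close(fd);
        return 0;
    }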
On the other hand, we do not currently want to enforce bounce
buffering when the guest request is aligned to 512 bytes.
The patch changes the default bounce buffer optimal alignment to
MAX(page size, 4k). 4k is chosen as the maximal known sector size on
real HDDs.
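In terms of plain POSIX calls, the alignment described above could be computed roughly as follows (a sketch; QEMU uses its own MAX() macro and page-size helpers rather than this exact code):

    #include <unistd.h>

    /* "MAX(page size, 4k)": never fall below 4096, the largest sector
     * size known on real HDDs, but follow the page size if larger. */
    static size_t default_opt_mem_alignment(void)
    {
        long page_size = sysconf(_SC_PAGESIZE);

        return page_size > 4096 ? (size_t)page_size : 4096;
    }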
The justification for the performance improvement is quite interesting.
From the kernel's point of view, each request to the disk was split
in two. This can be seen in blktrace output like this:
9,0 11 1 0.000000000 11151 Q WS 312737792 + 1023 [qemu-img]
9,0 11 2 0.000007938 11151 Q WS 312738815 + 8 [qemu-img]
9,0 11 3 0.000030735 11151 Q WS 312738823 + 1016 [qemu-img]
9,0 11 4 0.000032482 11151 Q WS 312739839 + 8 [qemu-img]
9,0 11 5 0.000041379 11151 Q WS 312739847 + 1016 [qemu-img]
9,0 11 6 0.000042818 11151 Q WS 312740863 + 8 [qemu-img]
9,0 11 7 0.000051236 11151 Q WS 312740871 + 1017 [qemu-img]
9,0 5 1 0.169071519 11151 Q WS 312741888 + 1023 [qemu-img]
After the patch the pattern becomes normal:
9,0 6 1 0.000000000 12422 Q WS 314834944 + 1024 [qemu-img]
9,0 6 2 0.000038527 12422 Q WS 314835968 + 1024 [qemu-img]
9,0 6 3 0.000072849 12422 Q WS 314836992 + 1024 [qemu-img]
9,0 6 4 0.000106276 12422 Q WS 314838016 + 1024 [qemu-img]
and the number of requests sent to the disk (which can be calculated
by counting lines in the blktrace output) is reduced by about a factor of two.
Both qemu-img and qemu-io are affected, while qemu-kvm is not. The guest
does its job well and real requests arrive properly aligned (to the page size).
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1431441056-26198-3-git-send-email-den@openvz.org
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Diffstat (limited to 'block/io.c')
-rw-r--r-- | block/io.c | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/io.c b/block/io.c
index e6c363921d..b3ea95c74f 100644
--- a/block/io.c
+++ b/block/io.c
@@ -205,7 +205,7 @@ void bdrv_refresh_limits(BlockDriverState *bs, Error **errp)
         bs->bl.opt_mem_alignment = bs->file->bl.opt_mem_alignment;
     } else {
         bs->bl.min_mem_alignment = 512;
-        bs->bl.opt_mem_alignment = 512;
+        bs->bl.opt_mem_alignment = getpagesize();
     }
 
     if (bs->backing_hd) {
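For illustration, a caller allocating a bounce buffer with this optimal alignment could look roughly like the sketch below (standalone POSIX code; QEMU itself routes such allocations through its qemu_memalign()/qemu_blockalign() helpers and bs->bl.opt_mem_alignment rather than calling posix_memalign() directly):

    #include <stdlib.h>
    #include <unistd.h>

    /* Allocate a bounce buffer aligned to the page size, mirroring the
     * new opt_mem_alignment default; before the patch this was 512. */
    static void *alloc_bounce_buffer(size_t size)
    {
        void *buf = NULL;
        size_t align = (size_t)sysconf(_SC_PAGESIZE);

        if (posix_memalign(&buf, align, size) != 0) {
            return NULL;
        }
        return buf;
    }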