author     Paolo Bonzini <pbonzini@redhat.com>    2015-03-23 10:57:21 +0100
committer  Paolo Bonzini <pbonzini@redhat.com>    2015-06-05 17:09:59 +0200
commit     6f6a5ef3e429f92f987678ea8c396aab4dc6aa19
tree       0fb7cdd721ee6c7274b266f5e6398f43250a7413
parent     ea8cb1a8d98f5e3822a23a7cecdb4add0f29178b
memory: include DIRTY_MEMORY_MIGRATION in the dirty log mask
The separate handling of DIRTY_MEMORY_MIGRATION, which does not
call log_start/log_stop callbacks when it changes in a region's
dirty logging mask, has caused several bugs.
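To make that asymmetry concrete, here is a minimal standalone sketch (simplified types and names, not QEMU's actual code) of a per-region dirty-log mask where the migration client is driven by a separate global flag; a listener that only reacts to changes of the region's mask never hears about it:

/* Standalone model of the pre-patch behaviour: the migration client is
 * tracked by a global flag, so the region's dirty_log_mask never changes
 * and no mask-based notification is generated. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum { DIRTY_MEMORY_VGA, DIRTY_MEMORY_CODE, DIRTY_MEMORY_MIGRATION };

struct region {
    uint8_t dirty_log_mask;          /* per-client logging bits */
};

static bool global_dirty_log;        /* toggled directly, bypassing the mask */

int main(void)
{
    struct region r = { .dirty_log_mask = 1 << DIRTY_MEMORY_VGA };

    global_dirty_log = true;         /* "migration starts" */
    /* r.dirty_log_mask did not change, so a listener keyed on mask
     * transitions gets no log_start for DIRTY_MEMORY_MIGRATION. */
    printf("global flag %d, region mask still 0x%x\n",
           global_dirty_log, r.dirty_log_mask);
    return 0;
}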
One recent example is commit 4cc856f (kvm-all: Sync dirty-bitmap from
kvm before kvm destroy the corresponding dirty_bitmap, 2015-04-02).
A related performance problem is that KVM keeps tracking dirty pages
after a failed live migration, which hurts performance because huge
page mappings remain disallowed.
This patch removes the root cause of the problem by reporting
DIRTY_MEMORY_MIGRATION changes via log_start and log_stop.
Note that we now have to rebuild the FlatView when global dirty
logging is enabled or disabled; this ensures that log_start and
log_stop callbacks are invoked.
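The sketch below is a simplified standalone model (hypothetical helper names, not the real FlatView code) of why recomputing the view is enough: once a range's effective mask differs between the old and new view, the changed bits can be delivered as log_start/log_stop notifications.

#include <stdint.h>
#include <stdio.h>

static void log_start(int client) { printf("log_start(client=%d)\n", client); }
static void log_stop(int client)  { printf("log_stop(client=%d)\n", client); }

/* Report every client whose bit differs between the old and new mask. */
static void notify_mask_change(uint8_t old_mask, uint8_t new_mask)
{
    for (int client = 0; client < 8; client++) {
        uint8_t bit = 1u << client;
        if (!(old_mask & bit) && (new_mask & bit)) {
            log_start(client);
        } else if ((old_mask & bit) && !(new_mask & bit)) {
            log_stop(client);
        }
    }
}

int main(void)
{
    /* After global dirty logging is enabled and the view is rebuilt, the
     * recomputed mask of a range gains the migration bit (bit 2 here)... */
    notify_mask_change(0x01, 0x01 | (1u << 2));
    /* ...and disabling it later produces the matching log_stop. */
    notify_mask_change(0x01 | (1u << 2), 0x01);
    return 0;
}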
This will also be used to make the setting of bitmaps conditional.
In general, this patch lets users of the memory API ignore the
global state of dirty logging if they handle dirty logging
generically per region.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-rw-r--r--   memory.c | 20
1 file changed, 18 insertions(+), 2 deletions(-)
@@ -588,7 +588,7 @@ static void render_memory_region(FlatView *view,
     remain = clip.size;
 
     fr.mr = mr;
-    fr.dirty_log_mask = mr->dirty_log_mask;
+    fr.dirty_log_mask = memory_region_get_dirty_log_mask(mr);
     fr.romd_mode = mr->romd_mode;
     fr.readonly = readonly;
 
@@ -1400,7 +1400,11 @@ bool memory_region_is_skip_dump(MemoryRegion *mr)
 
 uint8_t memory_region_get_dirty_log_mask(MemoryRegion *mr)
 {
-    return mr->dirty_log_mask;
+    uint8_t mask = mr->dirty_log_mask;
+    if (global_dirty_log) {
+        mask |= (1 << DIRTY_MEMORY_MIGRATION);
+    }
+    return mask;
 }
 
 bool memory_region_is_logging(MemoryRegion *mr, uint8_t client)
@@ -1962,12 +1966,24 @@ void address_space_sync_dirty_bitmap(AddressSpace *as)
 void memory_global_dirty_log_start(void)
 {
     global_dirty_log = true;
+
     MEMORY_LISTENER_CALL_GLOBAL(log_global_start, Forward);
+
+    /* Refresh DIRTY_LOG_MIGRATION bit. */
+    memory_region_transaction_begin();
+    memory_region_update_pending = true;
+    memory_region_transaction_commit();
 }
 
 void memory_global_dirty_log_stop(void)
 {
     global_dirty_log = false;
+
+    /* Refresh DIRTY_LOG_MIGRATION bit. */
+    memory_region_transaction_begin();
+    memory_region_update_pending = true;
+    memory_region_transaction_commit();
+
     MEMORY_LISTENER_CALL_GLOBAL(log_global_stop, Reverse);
 }
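As a rough illustration of the new semantics, the following standalone sketch models the patched mask computation outside of QEMU; the enum values and the effective_dirty_log_mask() helper are stand-ins, only the logic mirrors the memory_region_get_dirty_log_mask() hunk above.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum { DIRTY_MEMORY_VGA, DIRTY_MEMORY_CODE, DIRTY_MEMORY_MIGRATION };

static bool global_dirty_log;

/* Mirrors the patched mask computation: the region's own mask plus the
 * migration bit whenever global dirty logging is enabled. */
static uint8_t effective_dirty_log_mask(uint8_t region_mask)
{
    uint8_t mask = region_mask;
    if (global_dirty_log) {
        mask |= (1 << DIRTY_MEMORY_MIGRATION);
    }
    return mask;
}

int main(void)
{
    uint8_t region_mask = 1 << DIRTY_MEMORY_VGA;

    printf("idle:       0x%x\n", effective_dirty_log_mask(region_mask));
    global_dirty_log = true;    /* memory_global_dirty_log_start() analogue */
    printf("migrating:  0x%x\n", effective_dirty_log_mask(region_mask));
    global_dirty_log = false;   /* memory_global_dirty_log_stop() analogue */
    printf("afterwards: 0x%x\n", effective_dirty_log_mask(region_mask));
    return 0;
}

With the migration bit folded into the reported mask and delivered through log_start/log_stop, a listener can react to the per-region mask alone, which is the "generic per region" handling the commit message refers to.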