| author | Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> | 2011-09-22 16:57:55 +0800 |
|---|---|---|
| committer | Avi Kivity <avi@redhat.com> | 2011-12-27 11:17:01 +0200 |
| commit | 5d9ca30e96f567b67a36727aa4ebb34911a2b84a (patch) | |
| tree | 2d045ff9ef170be43d6204e1a9eadf368726227c | |
| parent | 889e5cbced6c191bb7e25c1b30b43e59a12561f9 (diff) | |
KVM: MMU: fix detecting misaligned access
Sometimes we only modify the last byte of a pte to update a status bit; for
example, in the Linux kernel clear_bit() is used to clear the r/w bit, and that
function uses the 'andb' instruction. In this case kvm_mmu_pte_write() treats
the write as a misaligned access and zaps the shadow page table.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-rw-r--r-- | arch/x86/kvm/mmu.c | 8 |
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 986aea55366..ca6f72ab4c3 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3602,6 +3602,14 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,
 	offset = offset_in_page(gpa);
 	pte_size = sp->role.cr4_pae ? 8 : 4;
+
+	/*
+	 * Sometimes, the OS only writes the last one bytes to update status
+	 * bits, for example, in linux, andb instruction is used in clear_bit().
+	 */
+	if (!(offset & (pte_size - 1)) && bytes == 1)
+		return false;
+
 	misaligned = (offset ^ (offset + bytes - 1)) & ~(pte_size - 1);
 	misaligned |= bytes < 4;
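For reference, below is a minimal user-space sketch of the check this hunk modifies; it is not kernel code, and the helper name `write_is_misaligned()` and the sample offsets are made up for illustration. It shows how a write is flagged as misaligned when it straddles a pte boundary or covers fewer than 4 bytes, and how the added early return exempts the single-byte status-bit update described in the changelog.

```c
/*
 * Stand-alone sketch of the misalignment check in detect_write_misaligned().
 * Hypothetical helper and test values; not the kernel implementation.
 */
#include <stdbool.h>
#include <stdio.h>

static bool write_is_misaligned(unsigned int offset, unsigned int bytes)
{
	unsigned int pte_size = 8;	/* sp->role.cr4_pae ? 8 : 4 in the kernel */
	unsigned int misaligned;

	/*
	 * The added check: a one-byte write that starts on a pte boundary
	 * (e.g. clear_bit()'s andb updating a status bit) is not misaligned.
	 */
	if (!(offset & (pte_size - 1)) && bytes == 1)
		return false;

	/* Flag writes that cross a pte boundary or cover fewer than 4 bytes. */
	misaligned = (offset ^ (offset + bytes - 1)) & ~(pte_size - 1);
	misaligned |= bytes < 4;

	return misaligned;
}

int main(void)
{
	/* one-byte status-bit update at a pte-aligned offset: prints 0 */
	printf("1-byte status update: %d\n", write_is_misaligned(0x10, 1));
	/* two-byte write straddling two ptes: still flagged, prints 1 */
	printf("cross-pte write:      %d\n", write_is_misaligned(0x17, 2));
	return 0;
}
```

Running the sketch shows that, with the early return in place, the one-byte update is no longer reported as misaligned, so kvm_mmu_pte_write() would not zap the shadow page for it, while a write that actually crosses a pte boundary is still caught.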