Patchwork [v2,2/2] kvm: mmu: Used range based flushing in slot_handle_level_range

Submitter Ben Gardon
Date March 12, 2019, 6:45 p.m.
Message ID <20190312184559.187302-2-bgardon@google.com>
Permalink /patch/747615/
State New

Comments

Ben Gardon - March 12, 2019, 6:45 p.m.
Replace kvm_flush_remote_tlbs with kvm_flush_remote_tlbs_with_address
in slot_handle_level_range. When range-based flushes are not enabled,
kvm_flush_remote_tlbs_with_address falls back to kvm_flush_remote_tlbs.

This changes the behavior of many functions that indirectly use
slot_handle_level_range, iff range-based flushes are enabled. The
only potential problem I see with this is that kvm->tlbs_dirty will be
cleared less often; however, the only caller of slot_handle_level_range
that checks tlbs_dirty is kvm_mmu_notifier_invalidate_range_start,
which does a kvm_flush_remote_tlbs after calling kvm_unmap_hva_range
anyway.

Tested: Ran all kvm-unit-tests on an Intel Haswell machine with and
	without this patch. The patch introduced no new failures.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 arch/x86/kvm/mmu.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
Paolo Bonzini - March 15, 2019, 5:55 p.m.
On 12/03/19 19:45, Ben Gardon wrote:
> Replace kvm_flush_remote_tlbs with kvm_flush_remote_tlbs_with_address
> in slot_handle_level_range. When range-based flushes are not enabled,
> kvm_flush_remote_tlbs_with_address falls back to kvm_flush_remote_tlbs.
> 
> This changes the behavior of many functions that indirectly use
> slot_handle_level_range, iff range-based flushes are enabled. The
> only potential problem I see with this is that kvm->tlbs_dirty will be
> cleared less often; however, the only caller of slot_handle_level_range
> that checks tlbs_dirty is kvm_mmu_notifier_invalidate_range_start,
> which does a kvm_flush_remote_tlbs after calling kvm_unmap_hva_range
> anyway.
> 
> Tested: Ran all kvm-unit-tests on an Intel Haswell machine with and
> 	without this patch. The patch introduced no new failures.
> 
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  arch/x86/kvm/mmu.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 4cda5ee488454..9a403b7398f54 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -5501,7 +5501,9 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
>  
>  		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
>  			if (flush && lock_flush_tlb) {
> -				kvm_flush_remote_tlbs(kvm);
> +				kvm_flush_remote_tlbs_with_address(kvm,
> +						start_gfn,
> +						iterator.gfn - start_gfn + 1);
>  				flush = false;
>  			}
>  			cond_resched_lock(&kvm->mmu_lock);
> @@ -5509,7 +5511,8 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
>  	}
>  
>  	if (flush && lock_flush_tlb) {
> -		kvm_flush_remote_tlbs(kvm);
> +		kvm_flush_remote_tlbs_with_address(kvm, start_gfn,
> +						   end_gfn - start_gfn + 1);
>  		flush = false;
>  	}
>  
> 

Queued 1/2, and 2/2 for after the merge window.

Paolo
