Patchwork [1/2] kvm: arm: Clean up the checking for huge mapping

Submitter Suzuki K Poulose
Date April 10, 2019, 3:23 p.m.
Message ID <1554909832-7169-2-git-send-email-suzuki.poulose@arm.com>
Permalink /patch/769897/
State New

Comments

Suzuki K Poulose - April 10, 2019, 3:23 p.m.
If we are checking whether stage2 can map PAGE_SIZE,
we don't have to do the boundary checks, as both the host
VMA and the guest memslots are page aligned. Bail out
early in that case.

Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 virt/kvm/arm/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)
Zenghui Yu - April 11, 2019, 1:48 a.m.
On 2019/4/10 23:23, Suzuki K Poulose wrote:
> If we are checking whether the stage2 can map PAGE_SIZE,
> we don't have to do the boundary checks as both the host
> VMA and the guest memslots are page aligned. Bail the case
> easily.
> 
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
>   virt/kvm/arm/mmu.c | 4 ++++
>   1 file changed, 4 insertions(+)
> 
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index a39dcfd..6d73322 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1624,6 +1624,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
>   	hva_t uaddr_start, uaddr_end;
>   	size_t size;
>   
> +	/* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
> +	if (map_size == PAGE_SIZE)
> +		return true;
> +
>   	size = memslot->npages * PAGE_SIZE;
>   
>   	gpa_start = memslot->base_gfn << PAGE_SHIFT;
> 
We can do a comment cleanup as well in this patch.

s/<< PAGE_SIZE/<< PAGE_SHIFT/


thanks,
zenghui
Suzuki K Poulose - April 11, 2019, 9:47 a.m.
On 04/11/2019 02:48 AM, Zenghui Yu wrote:
> 
> On 2019/4/10 23:23, Suzuki K Poulose wrote:
>> If we are checking whether the stage2 can map PAGE_SIZE,
>> we don't have to do the boundary checks as both the host
>> VMA and the guest memslots are page aligned. Bail the case
>> easily.
>>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> ---
>>   virt/kvm/arm/mmu.c | 4 ++++
>>   1 file changed, 4 insertions(+)
>>
>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index a39dcfd..6d73322 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -1624,6 +1624,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
>>   	hva_t uaddr_start, uaddr_end;
>>   	size_t size;
>>
>> +	/* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
>> +	if (map_size == PAGE_SIZE)
>> +		return true;
>> +
>>   	size = memslot->npages * PAGE_SIZE;
>>
>>   	gpa_start = memslot->base_gfn << PAGE_SHIFT;
>>
> We can do a comment clean up as well in this patch.
> 
> s/<< PAGE_SIZE/<< PAGE_SHIFT/

Sure, I missed that. Will fix it in the next version.

Cheers
Suzuki

Patch

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index a39dcfd..6d73322 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1624,6 +1624,10 @@  static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
 	hva_t uaddr_start, uaddr_end;
 	size_t size;
 
+	/* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
+	if (map_size == PAGE_SIZE)
+		return true;
+
 	size = memslot->npages * PAGE_SIZE;
 
 	gpa_start = memslot->base_gfn << PAGE_SHIFT;