Patchwork [v2,1/4] KVM: arm/arm64: vgic: Do not cond_resched_lock() with IRQs disabled

Submitter Julien Thierry
Date Nov. 26, 2018, 6:26 p.m.
Message ID <1543256807-9768-2-git-send-email-julien.thierry@arm.com>
Permalink /patch/665263/
State New

Comments

Julien Thierry - Nov. 26, 2018, 6:26 p.m.
To change the active state of an IRQ via MMIO, a halt is requested for all
vcpus of the affected guest before modifying the IRQ state. This is done by
calling cond_resched_lock() in vgic_mmio_change_active(). However, interrupts
are disabled at this point and we cannot reschedule a vcpu.

Solve this by waiting for all vcpus to be halted after emitting the halt
request.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Suggested-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: stable@vger.kernel.org
---
 virt/kvm/arm/vgic/vgic-mmio.c | 36 ++++++++++++++----------------------
 1 file changed, 14 insertions(+), 22 deletions(-)
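
For context, the bug is the classic sleep-in-atomic-context pattern:
vgic_mmio_change_active() takes irq->irq_lock with spin_lock_irqsave() and
then calls cond_resched_lock(), which may schedule while interrupts are still
disabled. Condensed from the diff below (only the shape of the change, not
the full hunks):

	/* Before: inside vgic_mmio_change_active(), under spin_lock_irqsave() */
	while (irq->vcpu && irq->vcpu != requester_vcpu && irq->vcpu->cpu != -1)
		cond_resched_lock(&irq->irq_lock);	/* may sleep with IRQs off */

	/* After: in vgic_change_active_prepare(), before any lock is taken */
	kvm_arm_halt_guest(vcpu->kvm);
	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
		if (tmp == vcpu)
			continue;
		while (tmp->cpu != -1)		/* vcpu thread not yet out of the guest */
			cond_resched();		/* safe: IRQs enabled, no lock held */
	}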
Sasha Levin - Nov. 27, 2018, 6:57 a.m.
Hi,

[This is an automated email]

This commit has been processed because it contains a -stable tag.
The stable tag indicates that it's relevant for the following trees: all

The bot has tested the following trees: v4.19.4, v4.14.83, v4.9.140, v4.4.164, v3.18.126, 

v4.19.4: Build OK!
v4.14.83: Failed to apply! Possible dependencies:
    006df0f34930 ("KVM: arm/arm64: Support calling vgic_update_irq_pending from irq context")
    53692908b0f5 ("KVM: arm/arm64: vgic: Fix source vcpu issues for GICv2 SGI")
    67b5b673ad4d ("KVM: arm/arm64: vgic: Disallow Active+Pending for level interrupts")
    6c1b7521f4a0 ("KVM: arm/arm64: Factor out functionality to get vgic mmio requester_vcpu")
    df635c5b184d ("KVM: arm/arm64: Support VGIC dist pend/active changes for mapped IRQs")
    e40cc57bac79 ("KVM: arm/arm64: vgic: Support level-triggered mapped interrupts")
    f39d16cbabf9 ("KVM: arm/arm64: Guard kvm_vgic_map_is_active against !vgic_initialized")

v4.9.140: Failed to apply! Possible dependencies:
    006df0f34930 ("KVM: arm/arm64: Support calling vgic_update_irq_pending from irq context")
    2df903a89a81 ("KVM: arm/arm64: vgic: Implement support for userspace access")
    6c1b7521f4a0 ("KVM: arm/arm64: Factor out functionality to get vgic mmio requester_vcpu")
    8694e4da66a6 ("KVM: arm/arm64: Remove struct vgic_irq pending field")
    94574c9488e2 ("KVM: arm/arm64: vgic: Add distributor and redistributor access")
    9ce91c7234ff ("KVM: arm/arm64: vgic-its: rename itte into ite")
    d017d7b0bd7a ("KVM: arm/arm64: vgic: Implement VGICv3 CPU interface access")
    e96a006cb066 ("KVM: arm/arm64: vgic: Implement KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO ioctl")

v4.4.164: Failed to apply! Possible dependencies:
    05fb05a6ca25 ("KVM: arm/arm64: vgic-new: Removel harmful BUG_ON")
    0919e84c0fc1 ("KVM: arm/arm64: vgic-new: Add IRQ sync/flush framework")
    140b086dd197 ("KVM: arm/arm64: vgic-new: Add GICv2 world switch backend")
    35a2d58588f0 ("KVM: arm/arm64: vgic-new: Synchronize changes to active state")
    370a0ec18199 ("KVM: arm/arm64: Let vcpu thread modify its own active state")
    4493b1c4866a ("KVM: arm/arm64: vgic-new: Add MMIO handling framework")
    64a959d66e47 ("KVM: arm/arm64: vgic-new: Add acccessor to new struct vgic_irq instance")
    69b6fe0c6e7f ("KVM: arm/arm64: vgic-new: Add ACTIVE registers handlers")
    81eeb95ddbab ("KVM: arm/arm64: vgic-new: Implement virtual IRQ injection")
    8577370fb0cb ("KVM: Use simple waitqueue for vcpu->wq")
    96b298000db4 ("KVM: arm/arm64: vgic-new: Add PENDING registers handlers")
    b13216cf6010 ("KVM: arm/arm64: Provide functionality to pause and resume a guest")
    b18b57787f5e ("KVM: arm/arm64: vgic-new: Add data structure definitions")
    fb848db39661 ("KVM: arm/arm64: vgic-new: Add GICv2 MMIO handling framework")
    fd122e620983 ("KVM: arm/arm64: vgic-new: Add ENABLE registers handlers")

v3.18.126: Failed to apply! Possible dependencies:
    05bc8aafe664 ("arm/arm64: KVM: wrap 64 bit MMIO accesses with two 32 bit ones")
    35a2d58588f0 ("KVM: arm/arm64: vgic-new: Synchronize changes to active state")
    370a0ec18199 ("KVM: arm/arm64: Let vcpu thread modify its own active state")
    3caa2d8c3b2d ("arm/arm64: KVM: make the maximum number of vCPUs a per-VM value")
    59892136c40d ("arm/arm64: KVM: pass down user space provided GIC type into vGIC code")
    7f05db6a20fe ("kvm: drop unsupported capabilities, fix documentation")
    832158125d2e ("arm/arm64: KVM: add vgic.h header file")
    96415257a1bd ("arm/arm64: KVM: refactor vgic_handle_mmio() function")
    a0675c25d639 ("arm/arm64: KVM: add virtual GICv3 distributor emulation")
    ac3d373564d9 ("arm/arm64: KVM: allow userland to request a virtual GICv3")
    b13216cf6010 ("KVM: arm/arm64: Provide functionality to pause and resume a guest")
    b26e5fdac43c ("arm/arm64: KVM: introduce per-VM ops")
    c1426e4c5add ("KVM: arm/arm64: implement kvm_arch_intc_initialized")
    c32a42721ce6 ("kvm: Documentation: remove ia64")
    cc2d3216f53c ("irqchip: GICv3: ITS command queue")
    d97f683d0f4b ("arm/arm64: KVM: refactor MMIO accessors")
    ea2f83a7de9d ("arm/arm64: KVM: move kvm_register_device_ops() into vGIC probing")
    ef748917b529 ("arm/arm64: KVM: Remove 'config KVM_ARM_MAX_VCPUS'")
    f5c1434c217f ("irqchip: GICv3: rework redistributor structure")


How should we proceed with this patch?

--
Thanks,
Sasha
Christoffer Dall - Dec. 11, 2018, 10:20 a.m.
On Mon, Nov 26, 2018 at 06:26:44PM +0000, Julien Thierry wrote:
> To change the active state of an MMIO, halt is requested for all vcpus of
> the affected guest before modifying the IRQ state. This is done by calling
> cond_resched_lock() in vgic_mmio_change_active(). However interrupts are
> disabled at this point and we cannot reschedule a vcpu.
> 
> Solve this by waiting for all vcpus to be halted after emmiting the halt
> request.
> 
> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
> Suggested-by: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: stable@vger.kernel.org
> ---
>  virt/kvm/arm/vgic/vgic-mmio.c | 36 ++++++++++++++----------------------
>  1 file changed, 14 insertions(+), 22 deletions(-)
> 
> diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> index f56ff1c..5c76a92 100644
> --- a/virt/kvm/arm/vgic/vgic-mmio.c
> +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> @@ -313,27 +313,6 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
>  
>  	spin_lock_irqsave(&irq->irq_lock, flags);
>  
> -	/*
> -	 * If this virtual IRQ was written into a list register, we
> -	 * have to make sure the CPU that runs the VCPU thread has
> -	 * synced back the LR state to the struct vgic_irq.
> -	 *
> -	 * As long as the conditions below are true, we know the VCPU thread
> -	 * may be on its way back from the guest (we kicked the VCPU thread in
> -	 * vgic_change_active_prepare)  and still has to sync back this IRQ,
> -	 * so we release and re-acquire the spin_lock to let the other thread
> -	 * sync back the IRQ.
> -	 *
> -	 * When accessing VGIC state from user space, requester_vcpu is
> -	 * NULL, which is fine, because we guarantee that no VCPUs are running
> -	 * when accessing VGIC state from user space so irq->vcpu->cpu is
> -	 * always -1.
> -	 */
> -	while (irq->vcpu && /* IRQ may have state in an LR somewhere */
> -	       irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
> -	       irq->vcpu->cpu != -1) /* VCPU thread is running */
> -		cond_resched_lock(&irq->irq_lock);
> -
>  	if (irq->hw) {
>  		vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu);
>  	} else {
> @@ -368,8 +347,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
>   */
>  static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
>  {
> -	if (intid > VGIC_NR_PRIVATE_IRQS)
> +	if (intid > VGIC_NR_PRIVATE_IRQS) {
> +		struct kvm_vcpu *tmp;
> +		int i;
> +
>  		kvm_arm_halt_guest(vcpu->kvm);
> +
> +		/* Wait for each vcpu to be halted */
> +		kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
> +			if (tmp == vcpu)
> +				continue;
> +
> +			while (tmp->cpu != -1)
> +				cond_resched();
> +		}

I'm actually thinking we don't need this loop at all after the request
rework, which causes:

 1. kvm_arm_halt_guest() to use kvm_make_all_cpus_request(kvm, KVM_REQ_SLEEP), and
 2. KVM_REQ_SLEEP uses REQ_WAIT, and
 3. REQ_WAIT requires the VCPU to respond to IPIs before returning, and
 4. a VCPU thread can only respond when it enables interrupts, and
 5. enabling interrupts when running a VCPU only happens after syncing
    the VGIC hwstate.

Does that make sense?

It would be good if someone could validate this, but if it holds, this
patch just becomes a nice deletion of the logic in
vgic_mmio_change_active().
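
For reference, the request machinery behind points 1-3 above looks roughly
like this in kernels of this era; this is paraphrased from memory rather
than quoted from the tree, so the exact definitions in virt/kvm/arm/arm.c
and the arm64 kvm_host.h should be double-checked:

	void kvm_arm_halt_guest(struct kvm *kvm)
	{
		int i;
		struct kvm_vcpu *vcpu;

		kvm_for_each_vcpu(i, vcpu, kvm)
			vcpu->arch.pause = true;
		/* KVM_REQ_SLEEP carries KVM_REQUEST_WAIT, so the IPIs sent to
		 * vcpus currently in guest mode are waited for before returning. */
		kvm_make_all_cpus_request(kvm, KVM_REQ_SLEEP);
	}

	#define KVM_REQ_SLEEP \
		KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)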


Thanks,

    Christoffer
Julien Thierry - Dec. 14, 2018, 9:36 a.m.
On 11/12/2018 10:20, Christoffer Dall wrote:
> On Mon, Nov 26, 2018 at 06:26:44PM +0000, Julien Thierry wrote:
>> To change the active state of an MMIO, halt is requested for all vcpus of
>> the affected guest before modifying the IRQ state. This is done by calling
>> cond_resched_lock() in vgic_mmio_change_active(). However interrupts are
>> disabled at this point and we cannot reschedule a vcpu.
>>
>> Solve this by waiting for all vcpus to be halted after emmiting the halt
>> request.
>>
>> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
>> Suggested-by: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: stable@vger.kernel.org
>> ---
>>  virt/kvm/arm/vgic/vgic-mmio.c | 36 ++++++++++++++----------------------
>>  1 file changed, 14 insertions(+), 22 deletions(-)
>>
>> diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
>> index f56ff1c..5c76a92 100644
>> --- a/virt/kvm/arm/vgic/vgic-mmio.c
>> +++ b/virt/kvm/arm/vgic/vgic-mmio.c
>> @@ -313,27 +313,6 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
>>  
>>  	spin_lock_irqsave(&irq->irq_lock, flags);
>>  
>> -	/*
>> -	 * If this virtual IRQ was written into a list register, we
>> -	 * have to make sure the CPU that runs the VCPU thread has
>> -	 * synced back the LR state to the struct vgic_irq.
>> -	 *
>> -	 * As long as the conditions below are true, we know the VCPU thread
>> -	 * may be on its way back from the guest (we kicked the VCPU thread in
>> -	 * vgic_change_active_prepare)  and still has to sync back this IRQ,
>> -	 * so we release and re-acquire the spin_lock to let the other thread
>> -	 * sync back the IRQ.
>> -	 *
>> -	 * When accessing VGIC state from user space, requester_vcpu is
>> -	 * NULL, which is fine, because we guarantee that no VCPUs are running
>> -	 * when accessing VGIC state from user space so irq->vcpu->cpu is
>> -	 * always -1.
>> -	 */
>> -	while (irq->vcpu && /* IRQ may have state in an LR somewhere */
>> -	       irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
>> -	       irq->vcpu->cpu != -1) /* VCPU thread is running */
>> -		cond_resched_lock(&irq->irq_lock);
>> -
>>  	if (irq->hw) {
>>  		vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu);
>>  	} else {
>> @@ -368,8 +347,21 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
>>   */
>>  static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
>>  {
>> -	if (intid > VGIC_NR_PRIVATE_IRQS)
>> +	if (intid > VGIC_NR_PRIVATE_IRQS) {
>> +		struct kvm_vcpu *tmp;
>> +		int i;
>> +
>>  		kvm_arm_halt_guest(vcpu->kvm);
>> +
>> +		/* Wait for each vcpu to be halted */
>> +		kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
>> +			if (tmp == vcpu)
>> +				continue;
>> +
>> +			while (tmp->cpu != -1)
>> +				cond_resched();
>> +		}
> 
> I'm actually thinking we don't need this loop at all after the requet
> rework which causes:
> 
>  1. kvm_arm_halt_guest() to use kvm_make_all_cpus_request(kvm, KVM_REQ_SLEEP), and
>  2. KVM_REQ_SLEEP uses REQ_WAIT, and
>  3. REQ_WAIT requires the VCPU to respond to IPIs before returning, and
>  4. a VCPU thread can only respond when it enables interrupt, and
>  5. enabling interrupts when running a VCPU only happens after syncing
>     the VGIC hwstate.
> 
> Does that make sense?

I'm not super familiar with what goes on with the vgic hwstate syncing,
but looking at kvm_arm_halt_guest() and kvm_arch_vcpu_ioctl_run(), I
agree with the reasoning.

> It would be good if someone can validate this, but if it holds this
> patch just becomes a nice deletion of the logic in
> vgic-mmio_change_active.
> 

As long as running kvm_vgic_sync_hwstate() on each vcpu is all that is
needed before we can modify the active state, I think your solution is
definitely the way to go.
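
For anyone checking point 5, the relevant ordering on the guest-exit path of
kvm_arch_vcpu_ioctl_run() is roughly the following (a sketch of the
virt/kvm/arm/arm.c run loop around this version, worth verifying against the
actual tree):

	/* guest exit, still with interrupts disabled */
	kvm_arm_clear_debug(vcpu);
	kvm_pmu_sync_hwstate(vcpu);
	kvm_vgic_sync_hwstate(vcpu);	/* LR state written back to struct vgic_irq */
	kvm_timer_sync_hwstate(vcpu);	/* (only when the irqchip is in userspace) */
	local_irq_enable();		/* only now can the REQ_WAIT IPI be acknowledged */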

Thanks,

Patch

diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index f56ff1c..5c76a92 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -313,27 +313,6 @@  static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 
 	spin_lock_irqsave(&irq->irq_lock, flags);
 
-	/*
-	 * If this virtual IRQ was written into a list register, we
-	 * have to make sure the CPU that runs the VCPU thread has
-	 * synced back the LR state to the struct vgic_irq.
-	 *
-	 * As long as the conditions below are true, we know the VCPU thread
-	 * may be on its way back from the guest (we kicked the VCPU thread in
-	 * vgic_change_active_prepare)  and still has to sync back this IRQ,
-	 * so we release and re-acquire the spin_lock to let the other thread
-	 * sync back the IRQ.
-	 *
-	 * When accessing VGIC state from user space, requester_vcpu is
-	 * NULL, which is fine, because we guarantee that no VCPUs are running
-	 * when accessing VGIC state from user space so irq->vcpu->cpu is
-	 * always -1.
-	 */
-	while (irq->vcpu && /* IRQ may have state in an LR somewhere */
-	       irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
-	       irq->vcpu->cpu != -1) /* VCPU thread is running */
-		cond_resched_lock(&irq->irq_lock);
-
 	if (irq->hw) {
 		vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu);
 	} else {
@@ -368,8 +347,21 @@  static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
  */
 static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
 {
-	if (intid > VGIC_NR_PRIVATE_IRQS)
+	if (intid > VGIC_NR_PRIVATE_IRQS) {
+		struct kvm_vcpu *tmp;
+		int i;
+
 		kvm_arm_halt_guest(vcpu->kvm);
+
+		/* Wait for each vcpu to be halted */
+		kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
+			if (tmp == vcpu)
+				continue;
+
+			while (tmp->cpu != -1)
+				cond_resched();
+		}
+	}
 }
 
 /* See vgic_change_active_prepare */