Patchwork [v4,4/4] arm64: KVM: Enable support for :G/:H perf event modifiers

Submitter Andrew Murray
Date Dec. 4, 2018, 2:58 p.m.
Message ID <1543935500-23207-5-git-send-email-andrew.murray@arm.com>
Permalink /patch/672063/
State New

Comments

Andrew Murray - Dec. 4, 2018, 2:58 p.m.
Enable/disable event counters as appropriate when entering and exiting
the guest to enable support for guest or host only event counting.

For both VHE and non-VHE we switch the counters between host/guest at
EL2. EL2 is filtered out by the PMU when we are using the :G modifier.

The PMU may be on when we change which counters are enabled; however,
we avoid adding an isb as we instead rely on existing context
synchronisation events: the isb in kvm_arm_vhe_guest_exit for VHE and
the eret from the hvc in kvm_call_hyp.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 arch/arm64/kvm/hyp/switch.c | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)
Julien Thierry - Dec. 4, 2018, 4:16 p.m.
Hi Andrew,

On 04/12/18 14:58, Andrew Murray wrote:
> Enable/disable event counters as appropriate when entering and exiting
> the guest to enable support for guest or host only event counting.
> 
> For both VHE and non-VHE we switch the counters between host/guest at
> EL2. EL2 is filtered out by the PMU when we are using the :G modifier.
> 
> The PMU may be on when we change which counters are enabled; however,
> we avoid adding an isb as we instead rely on existing context
> synchronisation events: the isb in kvm_arm_vhe_guest_exit for VHE and
> the eret from the hvc in kvm_call_hyp.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---
>  arch/arm64/kvm/hyp/switch.c | 41 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 41 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index d496ef5..5e03921 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -373,6 +373,35 @@ static bool __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu)
>  	return true;
>  }
>  
> +static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt)
> +{
> +	u32 host = host_ctxt->events_host;
> +	u32 guest = host_ctxt->events_guest;
> +
> +	if (host == guest)
> +		return false;
> +
> +	if (host)
> +		write_sysreg(host, pmcntenclr_el0);

Nit:
Instead of clearing host counters we could clear non-guest counters:

	u32 clear_events = host_ctxt->events_host & ~guest;

	if (clear_events)
		write_sysreg(clear_events, pmcntenclr_el0);

Don't know if it makes a lot of difference.

> +
> +	if (guest)
> +		write_sysreg(guest, pmcntenset_el0);
> +
> +	return (host || guest);

We know this is true: if (host == 0 && guest == 0) then (host ==
guest), so we would already have returned false.


> +}
> +
> +static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
> +{
> +	u32 host = host_ctxt->events_host;
> +	u32 guest = host_ctxt->events_guest;
> +
> +	if (guest)
> +		write_sysreg(guest, pmcntenclr_el0);

Nit:
Same as above, we could just clear the counters that are exclusive to
the guest.

Cheers,
Andrew Murray - Dec. 5, 2018, 2:11 p.m.
On Tue, Dec 04, 2018 at 04:16:01PM +0000, Julien Thierry wrote:
> Hi Andrew,
> 
> On 04/12/18 14:58, Andrew Murray wrote:
> > Enable/disable event counters as appropriate when entering and exiting
> > the guest to enable support for guest or host only event counting.
> > 
> > For both VHE and non-VHE we switch the counters between host/guest at
> > EL2. EL2 is filtered out by the PMU when we are using the :G modifier.
> > 
> > The PMU may be on when we change which counters are enabled; however,
> > we avoid adding an isb as we instead rely on existing context
> > synchronisation events: the isb in kvm_arm_vhe_guest_exit for VHE and
> > the eret from the hvc in kvm_call_hyp.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > ---
> >  arch/arm64/kvm/hyp/switch.c | 41 +++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 41 insertions(+)
> > 
> > diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> > index d496ef5..5e03921 100644
> > --- a/arch/arm64/kvm/hyp/switch.c
> > +++ b/arch/arm64/kvm/hyp/switch.c
> > @@ -373,6 +373,35 @@ static bool __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu)
> >  	return true;
> >  }
> >  
> > +static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt)
> > +{
> > +	u32 host = host_ctxt->events_host;
> > +	u32 guest = host_ctxt->events_guest;
> > +
> > +	if (host == guest)
> > +		return false;
> > +
> > +	if (host)
> > +		write_sysreg(host, pmcntenclr_el0);
> 
> Nit:
> Instead of clearing host counters we could clear non-guest counters:
> 
> 	u32 clear_events = host_ctxt->events_host & ~guest;
> 
> 	if (clear_events)
> 		write_sysreg(clear_events, pmcntenclr_el0);
> 
> Don't know if it makes a lot of difference.

In the case where an event is enabled for both host and guest, your
suggested change prevents us from unnecessarily clearing and then setting
the same bit. Thus it may save a call to write_sysreg.

> 
> > +
> > +	if (guest)
> > +		write_sysreg(guest, pmcntenset_el0);
> > +
> > +	return (host || guest);
> 
> We know this is true: if (host == 0 && guest == 0) then (host ==
> guest), so we would already have returned false.

Doh.

> 
> 
> > +}
> > +
> > +static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
> > +{
> > +	u32 host = host_ctxt->events_host;
> > +	u32 guest = host_ctxt->events_guest;
> > +
> > +	if (guest)
> > +		write_sysreg(guest, pmcntenclr_el0);
> 
> Nit:
> Same as above, we could just clear the counters that are exclusive to
> the guest.

Thanks,

Andrew Murray

> 
> Cheers,
> 
> -- 
> Julien Thierry

Patch

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index d496ef5..5e03921 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -373,6 +373,35 @@  static bool __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt)
+{
+	u32 host = host_ctxt->events_host;
+	u32 guest = host_ctxt->events_guest;
+
+	if (host == guest)
+		return false;
+
+	if (host)
+		write_sysreg(host, pmcntenclr_el0);
+
+	if (guest)
+		write_sysreg(guest, pmcntenset_el0);
+
+	return (host || guest);
+}
+
+static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt)
+{
+	u32 host = host_ctxt->events_host;
+	u32 guest = host_ctxt->events_guest;
+
+	if (guest)
+		write_sysreg(guest, pmcntenclr_el0);
+
+	if (host)
+		write_sysreg(host, pmcntenset_el0);
+}
+
 /*
  * Return true when we were able to fixup the guest exit and should return to
  * the guest, false when we should restore the host state and return to the
@@ -488,12 +517,15 @@  int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
+	bool pmu_switch_needed;
 	u64 exit_code;
 
 	host_ctxt = vcpu->arch.host_cpu_context;
 	host_ctxt->__hyp_running_vcpu = vcpu;
 	guest_ctxt = &vcpu->arch.ctxt;
 
+	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
+
 	sysreg_save_host_state_vhe(host_ctxt);
 
 	__activate_traps(vcpu);
@@ -524,6 +556,9 @@  int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 
 	__debug_switch_to_host(vcpu);
 
+	if (pmu_switch_needed)
+		__pmu_switch_to_host(host_ctxt);
+
 	return exit_code;
 }
 
@@ -532,6 +567,7 @@  int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
 	struct kvm_cpu_context *guest_ctxt;
+	bool pmu_switch_needed;
 	u64 exit_code;
 
 	vcpu = kern_hyp_va(vcpu);
@@ -540,6 +576,8 @@  int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	host_ctxt->__hyp_running_vcpu = vcpu;
 	guest_ctxt = &vcpu->arch.ctxt;
 
+	pmu_switch_needed = __pmu_switch_to_guest(host_ctxt);
+
 	__sysreg_save_state_nvhe(host_ctxt);
 
 	__activate_traps(vcpu);
@@ -586,6 +624,9 @@  int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu)
 	 */
 	__debug_switch_to_host(vcpu);
 
+	if (pmu_switch_needed)
+		__pmu_switch_to_host(host_ctxt);
+
 	return exit_code;
 }