KVM: Disable preemption in kvm_get_running_vcpu()

Accessing a per-cpu variable only makes sense when preemption is
disabled (and the kernel does check this when the right debug options
are switched on).

For kvm_get_running_vcpu(), it is fine to return the value after
re-enabling preemption, as the preempt notifiers will make sure that
this is kept consistent across task migration (the comment above the
function hints at it, but lacks the crucial preemption management).

While we're at it, move the comment from the ARM code, which explains
why the whole thing works.

Fixes: 7495e22bb1 ("KVM: Move running VCPU from ARM to common code")
Cc: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Tested-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/318984f6-bc36-33a3-abc6-bf2295974b06@huawei.com
Message-id: <20200207163410.31276-1-maz@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
commit 1f03b2bcd0
parent bab0c318ba
Author: Marc Zyngier, 2020-02-07 16:34:10 +00:00; committed by Paolo Bonzini
2 changed files with 13 additions and 15 deletions


@@ -179,18 +179,6 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
 	return value;
 }
 
-/*
- * This function will return the VCPU that performed the MMIO access and
- * trapped from within the VM, and will return NULL if this is a userspace
- * access.
- *
- * We can disable preemption locally around accessing the per-CPU variable,
- * and use the resolved vcpu pointer after enabling preemption again, because
- * even if the current thread is migrated to another CPU, reading the per-CPU
- * value later will give us the same value as we update the per-CPU variable
- * in the preempt notifier handlers.
- */
-
 /* Must be called with irq->irq_lock held */
 static void vgic_hw_irq_spending(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 				 bool is_uaccess)


@@ -4409,12 +4409,22 @@ static void kvm_sched_out(struct preempt_notifier *pn,
 
 /**
  * kvm_get_running_vcpu - get the vcpu running on the current CPU.
- * Thanks to preempt notifiers, this can also be called from
- * preemptible context.
+ *
+ * We can disable preemption locally around accessing the per-CPU variable,
+ * and use the resolved vcpu pointer after enabling preemption again,
+ * because even if the current thread is migrated to another CPU, reading
+ * the per-CPU value later will give us the same value as we update the
+ * per-CPU variable in the preempt notifier handlers.
  */
 struct kvm_vcpu *kvm_get_running_vcpu(void)
 {
-	return __this_cpu_read(kvm_running_vcpu);
+	struct kvm_vcpu *vcpu;
+
+	preempt_disable();
+	vcpu = __this_cpu_read(kvm_running_vcpu);
+	preempt_enable();
+
+	return vcpu;
 }
 
 /**