KVM: do not assume PTE is writable after follow_pfn

commit bd2fae8da794b55bf2ac02632da3a151b10e664c upstream.

In order to convert an HVA to a PFN, KVM usually tries to use
the get_user_pages family of functions.  This, however, is not
possible for VM_IO vmas; in that case, KVM instead uses follow_pfn.
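
As an illustration of the flow described above, here is a minimal,
hypothetical sketch (not the actual KVM code; demo_hva_to_pfn and its
error handling are invented for this example, and it assumes a kernel
of roughly this era with mmap_read_lock): ordinary memory is resolved
through the get_user_pages family, while VM_IO/VM_PFNMAP vmas fall back
to follow_pfn, which reports only the raw PFN and therefore loses any
information about writability.

#include <linux/mm.h>
#include <linux/sched.h>

static int demo_hva_to_pfn(unsigned long hva, unsigned long *pfn)
{
	struct vm_area_struct *vma;
	struct page *page;
	int r = -EFAULT;

	/*
	 * Ordinary memory: get_user_pages gives us a struct page (and a
	 * reference the caller must eventually drop with put_page()).
	 */
	if (get_user_pages_unlocked(hva, 1, &page, FOLL_WRITE) == 1) {
		*pfn = page_to_pfn(page);
		return 0;
	}

	/*
	 * VM_IO/VM_PFNMAP vmas have no struct page; follow_pfn returns
	 * only the PFN, with no indication of whether it is writable.
	 */
	mmap_read_lock(current->mm);
	vma = find_vma(current->mm, hva);
	if (vma && hva >= vma->vm_start &&
	    (vma->vm_flags & (VM_IO | VM_PFNMAP)))
		r = follow_pfn(vma, hva, pfn);
	mmap_read_unlock(current->mm);

	return r;
}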

In doing this, however, KVM loses the information on whether the
PFN is writable.  That is usually not a problem because the main
use of VM_IO vmas with KVM is for BARs in PCI device assignment,
but it is a bug nonetheless.  To fix it, use follow_pte and check
pte_write while under the protection of the PTE lock.  The
information can be used to fail hva_to_pfn_remapped or passed
back to the caller via *writable.
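
The fix can be summarized with a small, self-contained sketch (again
not the exact KVM code: demo_pfn_and_writable is an invented helper,
and it returns -EFAULT where the real patch uses KVM_PFN_ERR_RO_FAULT).
It mirrors the follow_pte call signature used in the diff below: the
PTE is looked up once, and both the writability check and the PFN are
taken from that same PTE while the PTE lock is held.

#include <linux/mm.h>

static int demo_pfn_and_writable(struct vm_area_struct *vma,
				 unsigned long addr, bool write_fault,
				 bool *writable, unsigned long *pfn)
{
	pte_t *ptep;
	spinlock_t *ptl;
	int r;

	/* Look up the PTE; on success the PTE lock is held. */
	r = follow_pte(vma->vm_mm, addr, NULL, &ptep, NULL, &ptl);
	if (r)
		return r;

	/* A write was requested but the mapping is read-only. */
	if (write_fault && !pte_write(*ptep)) {
		pte_unmap_unlock(ptep, ptl);
		return -EFAULT;
	}

	/* Read writability and PFN from the same, still-locked PTE. */
	if (writable)
		*writable = pte_write(*ptep);
	*pfn = pte_pfn(*ptep);

	pte_unmap_unlock(ptep, ptl);
	return 0;
}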

Usage of follow_pfn was introduced in commit add6a0cd1c ("KVM: MMU: try to fix
up page faults before giving up", 2016-07-05); however, even older versions
have the same issue, all the way back to commit 2e2e3738af ("KVM:
Handle vma regions with no backing page", 2008-07-20), as they also did
not check whether the PFN was writable.

Fixes: 2e2e3738af ("KVM: Handle vma regions with no backing page")
Reported-by: David Stevens <stevensd@google.com>
Cc: 3pvd@google.com
Cc: Jann Horn <jannh@google.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

@@ -1889,9 +1889,11 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 			       kvm_pfn_t *p_pfn)
 {
 	unsigned long pfn;
+	pte_t *ptep;
+	spinlock_t *ptl;
 	int r;
 
-	r = follow_pfn(vma, addr, &pfn);
+	r = follow_pte(vma->vm_mm, addr, NULL, &ptep, NULL, &ptl);
 	if (r) {
 		/*
 		 * get_user_pages fails for VM_IO and VM_PFNMAP vmas and does
@@ -1906,14 +1908,19 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		if (r)
 			return r;
 
-		r = follow_pfn(vma, addr, &pfn);
+		r = follow_pte(vma->vm_mm, addr, NULL, &ptep, NULL, &ptl);
 		if (r)
 			return r;
+	}
 
+	if (write_fault && !pte_write(*ptep)) {
+		pfn = KVM_PFN_ERR_RO_FAULT;
+		goto out;
 	}
 
 	if (writable)
-		*writable = true;
+		*writable = pte_write(*ptep);
+	pfn = pte_pfn(*ptep);
 
 	/*
 	 * Get a reference here because callers of *hva_to_pfn* and
@@ -1928,6 +1935,8 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 	 */
 	kvm_get_pfn(pfn);
 
+out:
+	pte_unmap_unlock(ptep, ptl);
 	*p_pfn = pfn;
 	return 0;
 }