// Copyright 2022 The ChromiumOS Authors
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

//! Wraps VfioContainer for virtio-iommu implementation
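//!
//! A minimal usage sketch (illustrative only, not compiled; assumes a
//! configured `VfioContainer` and the VM's `GuestMemory` are available as
//! `container` and `mem`):
//!
//! ```ignore
//! let wrapper = VfioWrapper::new(Arc::new(Mutex::new(container)), mem);
//! // Query the supported IOMMU page sizes before building mappings.
//! let mask = wrapper.get_mask()?;
//! ```
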
use std::sync::Arc;

use anyhow::Context;
use base::AsRawDescriptor;
use base::AsRawDescriptors;
use base::Protection;
use base::RawDescriptor;
use sync::Mutex;
use vm_memory::GuestAddress;
use vm_memory::GuestMemory;

use crate::vfio::VfioError;
use crate::virtio::iommu::memory_mapper::AddMapResult;
use crate::virtio::iommu::memory_mapper::MappingInfo;
use crate::virtio::iommu::memory_mapper::MemoryMapper;
use crate::virtio::iommu::memory_mapper::RemoveMapResult;
use crate::VfioContainer;

pub struct VfioWrapper {
container: Arc<Mutex<VfioContainer>>,
// ID of the VFIO group which constitutes the container. Note that we rely on
// the fact that no container contains multiple groups.
id: u32,
mem: GuestMemory,
}

impl VfioWrapper {
pub fn new(container: Arc<Mutex<VfioContainer>>, mem: GuestMemory) -> Self {
let c = container.lock();
let groups = c.group_ids();
// NOTE: vfio_get_container ensures each group gets its own container.
assert!(groups.len() == 1);
let id = *groups[0];
drop(c);
Self { container, id, mem }
    }

pub fn new_with_id(container: VfioContainer, id: u32, mem: GuestMemory) -> Self {
Self {
container: Arc::new(Mutex::new(container)),
id,
mem,
}
    }

pub fn clone_as_raw_descriptor(&self) -> Result<RawDescriptor, VfioError> {
self.container.lock().clone_as_raw_descriptor()
}
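
    /// # Safety
    ///
    /// `map.gpa` must already contain a valid host virtual address, and the
    /// `map.size` bytes starting at that address must remain valid for the
    /// lifetime of the mapping (see the SAFETY comment in `add_map`).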
unsafe fn do_map(&self, map: MappingInfo) -> anyhow::Result<AddMapResult> {
let res = self.container.lock().vfio_dma_map(
map.iova,
map.size,
map.gpa.offset(),
map.prot.allows(&Protection::write()),
);
if let Err(VfioError::IommuDmaMap(err)) = res {
if err.errno() == libc::EEXIST {
                // A mapping already exists in the requested range; report an
                // overlap failure so the caller can resolve the conflict.
return Ok(AddMapResult::OverlapFailure);
}
}
res.context("vfio mapping error").map(|_| AddMapResult::Ok)
}
}

impl MemoryMapper for VfioWrapper {
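    /// Translates `map.gpa` to a host virtual address and installs the
    /// IOVA mapping through VFIO.
    ///
    /// A caller-side sketch (illustrative only, not compiled; `mapper` and
    /// `mapping` are assumed to exist):
    ///
    /// ```ignore
    /// match mapper.add_map(mapping)? {
    ///     AddMapResult::Ok => (),
    ///     AddMapResult::OverlapFailure => { /* reject the guest's map request */ }
    /// }
    /// ```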
fn add_map(&mut self, mut map: MappingInfo) -> anyhow::Result<AddMapResult> {
map.gpa = GuestAddress(
self.mem
.get_host_address_range(map.gpa, map.size as usize)
.context("failed to find host address")? as u64,
);
// SAFETY:
// Safe because both guest and host address are guaranteed by
// get_host_address_range() to be valid.
unsafe { self.do_map(map) }
    }

unsafe fn vfio_dma_map(
&mut self,
iova: u64,
hva: u64,
size: u64,
prot: Protection,
) -> anyhow::Result<AddMapResult> {
self.do_map(MappingInfo {
iova,
gpa: GuestAddress(hva),
size,
prot,
})
    }

fn remove_map(&mut self, iova_start: u64, size: u64) -> anyhow::Result<RemoveMapResult> {
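        // Fail early if `iova_start + size` would wrap past the end of the
        // u64 address space.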
iova_start.checked_add(size).context("iova overflow")?;
self.container
.lock()
.vfio_dma_unmap(iova_start, size)
.context("vfio unmapping error")
.map(|_| RemoveMapResult::Success(None))
}
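
    /// Returns the bitmask of IOMMU page sizes supported by the VFIO
    /// container (each set bit corresponds to a supported page size).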
fn get_mask(&self) -> anyhow::Result<u64> {
self.container
.lock()
.vfio_get_iommu_page_size_mask()
.context("vfio get mask error")
    }

    fn supports_detach(&self) -> bool {
        // A few reasons why we don't support detach:
        //
        // 1. It doesn't seem possible to dynamically attach and detach an IOMMU domain if
        //    the virtio IOMMU device is running on top of VFIO.
        // 2. Even if VIRTIO_IOMMU_T_DETACH were implemented in the front-end driver, it
        //    could violate the following virtio IOMMU spec requirement: "Detach an endpoint
        //    from a domain. When this request completes, the endpoint cannot access any
        //    mapping from that domain anymore."
        //
        //    This is because VFIO doesn't support detaching a single device. When the
        //    virtio-iommu device receives a VIRTIO_IOMMU_T_DETACH request, it can either:
        //    - detach the whole group: any other endpoints in the group lose access to the
        //      domain, or
        //    - not detach the group at all: this breaks the spec requirement quoted above.
        false
    }

fn id(&self) -> u32 {
self.id
}
}
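
// A hedged sketch (names `wrapper`, `iova`, and `size` are illustrative) of
// how virtio-iommu code might hold this wrapper behind the `MemoryMapper`
// trait:
//
//   let mut mapper: Box<dyn MemoryMapper> = Box::new(wrapper);
//   let result = mapper.remove_map(iova, size)?;
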
impl AsRawDescriptors for VfioWrapper {
fn as_raw_descriptors(&self) -> Vec<RawDescriptor> {
vec![self.container.lock().as_raw_descriptor()]
}
}