Fields
fids: BTreeMap<u32, Fid>
proc: File
cfg: Config
Implementations
impl Server
pub fn new<P: Into<Box<Path>>>(
root: P,
uid_map: ServerUidMap,
gid_map: ServerGidMap
) -> Result<Server>
pub fn with_config(cfg: Config) -> Result<Server>
pub fn keep_fds(&self) -> Vec<RawFd>
pub fn handle_message<R: Read, W: Write>(
&mut self,
reader: &mut R,
writer: &mut W
) -> Result<()>
fn auth(&mut self, _auth: &Tauth) -> Result<Rauth>
fn attach(&mut self, attach: &Tattach) -> Result<Rattach>
fn version(&mut self, version: &Tversion) -> Result<Rversion>
fn flush(&mut self, _flush: &Tflush) -> Result<()>
fn walk(&mut self, walk: Twalk) -> Result<Rwalk>
fn read(&mut self, read: &Tread) -> Result<Rread>
fn write(&mut self, write: &Twrite) -> Result<Rwrite>
fn clunk(&mut self, clunk: &Tclunk) -> Result<()>
fn remove(&mut self, _remove: &Tremove) -> Result<()>
fn statfs(&mut self, statfs: &Tstatfs) -> Result<Rstatfs>
fn lopen(&mut self, lopen: &Tlopen) -> Result<Rlopen>
fn lcreate(&mut self, lcreate: Tlcreate) -> Result<Rlcreate>
fn symlink(&mut self, _symlink: &Tsymlink) -> Result<Rsymlink>
fn mknod(&mut self, _mknod: &Tmknod) -> Result<Rmknod>
fn rename(&mut self, _rename: &Trename) -> Result<()>
fn readlink(&mut self, readlink: &Treadlink) -> Result<Rreadlink>
fn get_attr(&mut self, get_attr: &Tgetattr) -> Result<Rgetattr>
fn set_attr(&mut self, set_attr: &Tsetattr) -> Result<()>
fn xattr_walk(&mut self, _xattr_walk: &Txattrwalk) -> Result<Rxattrwalk>
fn xattr_create(&mut self, _xattr_create: &Txattrcreate) -> Result<()>
fn readdir(&mut self, readdir: &Treaddir) -> Result<Rreaddir>
fn fsync(&mut self, fsync: &Tfsync) -> Result<()>
fn lock(&mut self, lock: &Tlock) -> Result<Rlock>
Implements POSIX byte-range locking. Our implementation mirrors that of QEMU's 9p server: we essentially punt on mirroring lock state between client and server and defer lock semantics to the VFS layer on the client side. Aside from the fd existence check, we always return success. QEMU reference: https://github.com/qemu/qemu/blob/754f756cc4c6d9d14b7230c62b5bb20f9d655888/hw/9pfs/9p.c#L3669
NOTE: this means that files locked on the client may be interfered with from either the server's side or from other clients (guests). This tracks with the QEMU implementation and will be obviated if crosvm decides to drop 9p in favor of virtio-fs. QEMU only allows a single client, and we leave it to users of the crate to provide actual lock handling.
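The "punt to the client VFS" strategy above can be sketched roughly as follows. This is a minimal illustration, not the crate's actual code: the `Tlock`, `Rlock`, and `Fid` types here are hypothetical stand-ins (the real ones come from the 9p protocol definitions and carry more fields), and the success status value is an assumption.

```rust
use std::collections::BTreeMap;

// Hypothetical simplified stand-ins for the real protocol types.
struct Tlock {
    fid: u32,
}
struct Rlock {
    status: u8,
}
// Assumed wire value meaning "lock granted".
const P9_LOCK_SUCCESS: u8 = 0;

// Stand-in for the server's per-fid state.
struct Fid;

// Sketch of lock(): verify the fid exists, then unconditionally report
// success. No lock state is tracked on the server side; real contention
// is arbitrated by the VFS layer on the client.
fn lock(fids: &BTreeMap<u32, Fid>, req: &Tlock) -> Result<Rlock, ()> {
    // The fd existence check is the only validation performed.
    fids.get(&req.fid).ok_or(())?;
    Ok(Rlock {
        status: P9_LOCK_SUCCESS,
    })
}
```

A server holding fid 1 would grant any lock request on it, while a request on an unknown fid fails the existence check.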
fn get_lock(&mut self, get_lock: &Tgetlock) -> Result<Rgetlock>
Much like lock(), this defers locking semantics to the client VFS and returns success.
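Correspondingly, a query for conflicting locks can simply report that none are held, since the server tracks no lock state. Again a hedged sketch with hypothetical stand-in types; the unlock type value 2 follows the usual F_UNLCK-style encoding (RDLCK=0, WRLCK=1, UNLCK=2) but is an assumption here.

```rust
use std::collections::BTreeMap;

// Hypothetical simplified stand-ins for the real protocol types; the real
// Tgetlock/Rgetlock carry more fields (start, length, proc_id, ...).
struct Tgetlock {
    fid: u32,
}
struct Rgetlock {
    lock_type: u8,
}
// Assumed wire value meaning "no conflicting lock held".
const P9_LOCK_TYPE_UNLCK: u8 = 2;

// Stand-in for the server's per-fid state.
struct Fid;

// Sketch of get_lock(): after confirming the fid exists, always report
// that the range is unlocked, leaving real contention checks to the
// client-side VFS.
fn get_lock(fids: &BTreeMap<u32, Fid>, req: &Tgetlock) -> Result<Rgetlock, ()> {
    fids.get(&req.fid).ok_or(())?;
    Ok(Rgetlock {
        lock_type: P9_LOCK_TYPE_UNLCK,
    })
}
```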