Struct cros_async::BlockingPool
pub struct BlockingPool {
    inner: Arc<Inner>,
}
A thread pool for running work that may block.
It is generally discouraged to do any blocking work inside an async function. However, this is
sometimes unavoidable when dealing with interfaces that don’t provide async variants. In this
case callers may use the BlockingPool to run the blocking work on a different thread and await
its result, which will prevent blocking the main thread of the application.
Since the blocking work is sent to another thread, users should be careful when using the
BlockingPool for latency-sensitive operations. Additionally, the BlockingPool is intended to
be used for work that will eventually complete on its own. Users who want to spawn a thread
should just use thread::spawn directly.
There is no way to cancel work once it has been picked up by one of the worker threads in the
BlockingPool. Dropping or shutting down the pool will block up to a timeout (default 10
seconds) to wait for any active blocking work to finish. Any threads running tasks that have not
completed by that time will be detached.
§Examples
Spawn a task to run in the BlockingPool and await its result.
use cros_async::BlockingPool;

let pool = BlockingPool::default();
let res = pool.spawn(move || {
    // Do some CPU-intensive or blocking work here.
    42
}).await;

assert_eq!(res, 42);
§Fields
inner: Arc<Inner>
§Implementations
impl BlockingPool
pub fn new(max_threads: usize, keepalive: Duration) -> BlockingPool
Create a new BlockingPool.
The BlockingPool will never spawn more than max_threads threads to do work, regardless of the
number of tasks that are added to it. This value should be set relatively low (for example, the
number of CPUs on the machine) if the pool is intended to run CPU-intensive work, or relatively
high (128 or more) if the pool is intended to be used for various IO operations that cannot be
completed asynchronously. The default value is 256.
Worker threads are spawned on demand when new work is added to the pool and will automatically
exit after being idle for some time, so there is no overhead for setting max_threads to a large
value when there is little to no work assigned to the BlockingPool. keepalive determines the
idle duration after which a worker thread will exit. The default value is 10 seconds.
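A minimal sketch of constructing and using a pool sized for CPU-bound work; the thread count and keepalive values below are arbitrary examples, and the .await assumes the surrounding code already runs inside an async context driven by an executor (not shown):

use std::time::Duration;
use cros_async::BlockingPool;

// Example sizing: 8 workers, which exit after 10 seconds of idleness.
let pool = BlockingPool::new(8, Duration::from_secs(10));

let sum = pool.spawn(move || {
    // CPU-intensive work runs on a pool thread, not the async executor.
    (0u64..1_000_000).sum::<u64>()
}).await;

assert_eq!(sum, 499_999_500_000);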
pub fn with_capacity(max_threads: usize, keepalive: Duration) -> BlockingPool
Like new, but pre-allocates capacity for up to max_threads.
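As a brief sketch, the call is identical to new apart from the up-front allocation; the values here are arbitrary example numbers:

use std::time::Duration;
use cros_async::BlockingPool;

// Reserve room for up to 16 workers immediately instead of growing on demand.
let pool = BlockingPool::with_capacity(16, Duration::from_secs(10));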
pub fn spawn<F, R>(&self, f: F) -> impl Future<Output = R>
Spawn a task to run in the BlockingPool.
Callers may await the returned Future to be notified when the work is completed. Dropping the
future will not cancel the task.
§Panics
Awaiting a Task after dropping the BlockingPool or calling BlockingPool::shutdown will panic if
the work was not completed before the pool was shut down.
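A small sketch of the drop behavior described above, under the assumption that the closure has already been handed to the pool by the time spawn returns:

use cros_async::BlockingPool;

let pool = BlockingPool::default();

let fut = pool.spawn(|| {
    // Blocking work here; it still runs even if nobody awaits the result.
});

// Per the documentation above, dropping the returned future does not cancel
// the task; it only discards the completion notification.
drop(fut);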
pub fn shutdown(&self, deadline: Option<Instant>) -> Result<(), ShutdownTimedOut>
Shut down the BlockingPool.
If deadline is provided then this will block until either all worker threads exit or the
deadline is exceeded. If deadline is not given then this will block indefinitely until all
worker threads exit. Any work that was added to the BlockingPool but not yet picked up by a
worker thread will not complete, and awaiting on the Task for that work will panic.
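A hedged sketch of shutting down with a deadline; the 5-second grace period is an arbitrary example value, and the error branch reflects the documented behavior of detaching workers that miss the deadline:

use std::time::{Duration, Instant};
use cros_async::BlockingPool;

let pool = BlockingPool::new(4, Duration::from_secs(10));

// ... spawn blocking work on `pool` ...

// Give in-flight work up to 5 seconds to finish before giving up.
if pool.shutdown(Some(Instant::now() + Duration::from_secs(5))).is_err() {
    // The deadline was exceeded; remaining worker threads are detached.
}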