equivalent to async #26
This is two questions. The first is the equivalent to `std::async`. In q, you can either schedule a task on an existing threadpool (by associating a scheduler+queue with it), or run a function in a newly created thread.

The second question is how to block-read a promise value. There is no direct equivalent of `f.get( )`, but you can achieve a blocking read with a `q::blocking_dispatcher`:

```cpp
auto ec = q::make_execution_context< q::blocking_dispatcher >( "blocking-read" );

int n;
prom
	// copy return value from prom to outer scope
	.then( [ &n ]( int val ){ n = val; } )
	// stop the blocking dispatcher
	.then(
		[ ec ]( ){ ec->dispatcher( )->terminate( q::termination::linger ); },
		ec->queue( )
	);

ec->dispatcher( )->start( ); // This will block until it's terminated (above)
n; // is now 42
```

Do note that if the thread function throws an exception, the returned promise will carry that exception instead of a value.
My use-case for the library is implementing a multi-producer single-consumer system, and having a blocking read is one of the requirements. Is there another way of enqueuing tasks/continuations onto a thread, blocking, and then extracting the return data from the promise?
Why would a blocking read be a requirement? Sequential handling of completions I can understand, but nothing ever really needs to block. If it's performance critical and you don't want to fire up a new thread for each task, use a thread pool.
You put tasks on a queue which is bound to a thread pool, and allow them to execute as soon as possible:

```cpp
// Make blocking "main" execution context
auto ec_main = q::make_execution_context< q::blocking_dispatcher >( "main" );

// Make an execution context for a threadpool with 4 threads.
// This context's _completion_ is scheduled on the blocking dispatcher's queue...
auto ec_pool = q::make_execution_context< q::threadpool, q::direct_scheduler >(
	"main background pool", ec_main->queue( ), 4 );

// This is a function which should run in the threadpool:
int some_function( ); // defined somewhere else

// Add task "some_function" to the threadpool:
auto promise = q::make_promise( ec_pool->queue( ), some_function );

promise.then(
	[ ]( int value ){ /* do something */ },
	ec_main->queue( ) // Will run on the blocking thread
);

ec_main->dispatcher( )->start( ); // Will block
```

However, if you're going to continuously read results from background tasks and handle them, consider using a `q::channel`:

```cpp
// Create a channel with the blocking dispatcher's queue as default.
// Backlog of 5; can be ignored unless you need to handle upstream pressure.
q::channel< int > ch( ec_main->queue( ), 5 );

auto readable = ch.get_readable( );
auto writable = ch.get_writable( );

auto completion = readable->consume(
	// Will be called for each completed job, and run sequentially because
	// this function is called on the blocking dispatcher's "main" thread
	[ ]( int value ){ /* handle value */ }
);

// When all tasks are done, close the writable:
writable.close( );

// The result of consume( ) is a promise which will be resolved when the
// writable is closed:
completion.then( [ ]( ){ std::cout << "Done" << std::endl; } );
```

Each background task can then capture the `writable` and push its result onto the channel.
What is the LibQ equivalent to `std::async`? More specifically:

```cpp
std::future< int > f = std::async( [ ]{
	// ...
	return 42;
} );
int i = f.get( );
```

where the call blocks until `f.get( )` returns with a value?