
Add *Async overloads in batching mechanism #319

Open
tiago18c opened this issue Feb 8, 2022 · 4 comments
Labels
enhancement New feature or request

Comments

@tiago18c
Member

tiago18c commented Feb 8, 2022

Lack of async methods in batching requests makes it unusable with Avalonia and Blazor WASM.

@tiago18c tiago18c added the enhancement New feature or request label Feb 8, 2022
@liquizard
Contributor

liquizard commented Feb 8, 2022

Is Avalonia the UI framework used for Citadel?
This issue cropped up during the original development of batching.
A minimal skeleton project that demonstrates the problem for Avalonia and Blazor WASM would be very helpful.

@tiago18c
Member Author

tiago18c commented Feb 8, 2022

There's an easy workaround, but the gist of it is this: when binding actions (methods) to the UI, if the method doesn't return quickly, the UI dispatcher locks up. However, if the bound method is async (along with all inner IO-bound calls), the dispatcher manages to interleave drawing and execution, which keeps the app from freezing.

The quick solution is to wrap anything long-running in an async lambda, e.g.: await Task.Run(() => /* long running task */);.
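For illustration, here's a minimal sketch of that workaround in an Avalonia event handler (TransactionBatcher and Flush are placeholders for the real batching API, not actual Solnet names):

```csharp
using System.Threading.Tasks;
using Avalonia.Controls;
using Avalonia.Interactivity;

public partial class MainWindow : Window
{
    // Placeholder for the real batcher type; illustrative only.
    private readonly TransactionBatcher _batcher = new();

    // async void is acceptable for UI event handlers; anything else should return Task.
    private async void OnSendClicked(object? sender, RoutedEventArgs e)
    {
        // Offload the blocking Flush to the thread pool so the UI
        // dispatcher keeps rendering while the batch executes.
        await Task.Run(() => _batcher.Flush());
    }
}
```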

However, in the Blazor WASM case, any call to a Task's .Result property will throw an exception due to some inner Monitor details. See this.
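As a minimal repro of the difference (assuming a Solnet IRpcClient named rpcClient; the exact exception type and message depend on the runtime version):

```csharp
// Blocking on a Task makes the single-threaded WASM runtime try to wait
// on a Monitor, which it cannot do, so this throws at runtime:
var broken = rpcClient.GetAccountInfoAsync(address).Result;

// Awaiting yields control instead of blocking the thread, so this works:
var ok = await rpcClient.GetAccountInfoAsync(address);
```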

@liquizard
Contributor

liquizard commented Feb 9, 2022

I will implement an ExecuteAsync method, but there is a concurrency hazard we need to defend against, and that may bring with it some internal changes.

At the moment, the internal Composer is reused between calls. When Execute happens synchronously, we are certain that new commands are not going to be added to the batch that is currently executing. Once execution has completed and the callbacks have been invoked, the batch is reset, ready to receive more commands.

If ExecuteAsync returns immediately, callers may start throwing more requests into the batch. With auto-execute, this should not be allowed to continue indefinitely; at some point it would be a good idea to block the caller, otherwise you could potentially have hundreds of large batches "in flight".

There are a couple of options on how we can implement this:

  1. make all of this the caller's problem, e.g. an attempt to add to a batch that is currently executing would throw a GTFO exception
  2. handle the complexity internally, e.g. Execute would store the executing batch internally and create a new one to receive new commands from the caller. A call to Flush would execute immediately if there is no batch currently executing; otherwise it would be queued (see the sketch below).

Any thoughts or preferences?
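For what it's worth, here is a rough sketch of what option 2 could look like (all names are illustrative, not the actual Composer internals):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public class BatchScheduler
{
    private readonly object _gate = new();
    private List<object> _current = new();          // commands for the next batch
    private Task _inFlight = Task.CompletedTask;    // tail of the execution queue

    public void Add(object command)
    {
        lock (_gate) _current.Add(command);
    }

    // Swap out the accumulating batch and chain its execution behind any
    // batch already in flight, so an executing batch never receives new commands.
    public Task FlushAsync()
    {
        lock (_gate)
        {
            var toSend = _current;
            _current = new List<object>();
            _inFlight = _inFlight
                .ContinueWith(_ => ExecuteAsync(toSend), TaskScheduler.Default)
                .Unwrap();
            return _inFlight;
        }
    }

    // Placeholder for the real RPC round trip.
    private static Task ExecuteAsync(List<object> batch) => Task.Delay(10);
}
```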

@tiago18c
Member Author

tiago18c commented Feb 9, 2022

There is already an ExecuteAsync. The problem is that everything above it has no *Async version. Wouldn't the following be enough?

1. FlushAsync that calls ExecuteAsync and awaits it
2. AddAsync that conditionally calls FlushAsync and awaits it
3. AddRequestAsync that calls AddAsync and awaits it
4. Overload every RPC method with an *Async version
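Sketched out, the chain would look something like this (signatures and field names are illustrative; the real Composer's will differ):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public class BatchComposer
{
    private readonly List<object> _reqs = new();
    private readonly bool _autoExecute = true;
    private readonly int _batchSizeTrigger = 10;

    // 3. each RPC *Async overload would end up here
    public Task AddRequestAsync(object request) => AddAsync(request);

    // 2. add, then conditionally flush and await
    public async Task AddAsync(object request)
    {
        _reqs.Add(request);
        if (_autoExecute && _reqs.Count >= _batchSizeTrigger)
            await FlushAsync();
    }

    // 1. flush awaits the actual execution
    public Task FlushAsync()
    {
        var batch = new List<object>(_reqs);
        _reqs.Clear();
        return ExecuteAsync(batch);
    }

    // Stand-in for the existing ExecuteAsync that performs the HTTP call.
    private static Task ExecuteAsync(List<object> batch) => Task.Delay(10);
}
```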

As for the concurrency issues, I think we could solve them with a semaphore that protects the _reqs collection (in Add, Clear, and .Count). I think this should be enough, and it would make the batcher thread-safe (it might require some other small changes).
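Something like this around each access to _reqs (sketch only, field names assumed):

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public class ThreadSafeBatch
{
    // One semaphore guarding the shared _reqs collection; WaitAsync avoids
    // blocking a thread, which also matters under Blazor WASM.
    private readonly SemaphoreSlim _reqsLock = new(1, 1);
    private readonly List<object> _reqs = new();

    public async Task AddAsync(object request)
    {
        await _reqsLock.WaitAsync();
        try { _reqs.Add(request); }
        finally { _reqsLock.Release(); }
    }

    public async Task ClearAsync()
    {
        await _reqsLock.WaitAsync();
        try { _reqs.Clear(); }
        finally { _reqsLock.Release(); }
    }
}
```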

I can work on a prototype of this later today.
