Better spec for how close() / cancel() and the closing promise should act #15

Open
etodanik opened this issue Aug 29, 2022 · 5 comments

etodanik commented Aug 29, 2022

How is close() expected to behave if the user has stopped reading the ReadableStream but there are still a few messages in the queue up for grabs?

If you implement this spec with a simple ReadableStream, you can end up with unread messages that the stream never pulled from the underlying source, because the user hasn't requested them via read(). The stream then never gets as far as reading the closing handshake, because the end user still hasn't pulled the latest message already available in the ReadableStream.

For example, the current Deno implementation (which is rightfully hidden behind an --unstable flag) is broken: if you have more than one incoming message in the queue and you don't intend to read them before or after calling close(), the closing promise essentially remains unresolved forever. Here is an example of this behavior:
denoland/deno#15616
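
Roughly, the pattern that gets stuck looks like this. This is only a sketch: I'm using the spec draft's `opened`/`closed` promises and `close()` options, and the exact field names differ a bit in Deno's unstable implementation.

```ts
// Sketch of the pattern that gets stuck (property and option names assumed
// from the spec draft; Deno's unstable implementation differs slightly).
const wss = new WebSocketStream("wss://example.com/socket");
const { readable, writable } = await wss.opened;

// Suppose the server has already pushed several messages, but we never call
// readable.getReader().read(), so backpressure stops the implementation from
// pulling further frames off the connection.

wss.close({ closeCode: 1000, reason: "done" });

// With the behaviour described above this can hang forever: the Close frame
// sits behind unread data messages that nobody will ever pull.
await wss.closed;
```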

I think the spec needs to be clearer about the mechanics here, so that parties implementing this functionality don't run into the same issue and don't diverge too much in how they solve it.

For example:

  1. You could say it's the user's responsibility to explicitly fast-forward the reader until they receive the Close frame message.
  2. You could fill up the controller queue "for the user" even if they won't read it (I guess this is kind of my preferred way?); a rough sketch of what I mean follows this list.
  3. How do you react if the server you're interfacing with is not acting to spec? The RFC says to "wait a reasonable amount of time" and then disconnect (see the section on Close frames in RFC 6455). What is this reasonable amount of time? Should it be configurable in WebSocketStreamOptions?
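
To illustrate option 2, here is a rough implementer-side sketch. The `UnderlyingConnection` interface and its methods are made-up placeholders for however an implementation actually talks to the socket; they are not part of any spec.

```ts
// Hypothetical implementer-side sketch of option 2. UnderlyingConnection is a
// made-up stand-in for the real transport; it is not part of any spec.
interface UnderlyingConnection {
  // Frames already received from the network but not yet handed to the stream.
  takeBufferedMessages(): (string | Uint8Array)[];
  sendCloseFrame(code: number, reason: string): Promise<void>;
  waitForCloseFrame(): Promise<void>;
}

async function closeIgnoringBackpressure(
  conn: UnderlyingConnection,
  controller: ReadableStreamDefaultController<string | Uint8Array>,
  code = 1000,
  reason = "",
): Promise<void> {
  // Enqueue everything already received "for the user", even though the
  // controller's desiredSize may be <= 0, i.e. disregard backpressure.
  for (const message of conn.takeBufferedMessages()) {
    controller.enqueue(message);
  }
  await conn.sendCloseFrame(code, reason);
  await conn.waitForCloseFrame(); // the closing handshake can now complete
  controller.close(); // the ReadableStream ends, so the closed promise can settle
}
```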

I think this part is trickier than it seems, and it would benefit from a higher-resolution description in the spec itself so that everyone implements it in a way that behaves the same. Right now it leaves too much room to the imagination.

@ricea @tomayac

tomayac (Contributor) commented Aug 29, 2022

@ricea is the spec owner. From my DevRel point of view, I agree with you that option 2 looks like the way most developers would expect it to work.

ricea (Owner) commented Aug 30, 2022

Yes, this is mildly broken in Chrome too. If the WebSocket is blocked due to backpressure then we'll never read the Close frame.

Forcing the user to read from the ReadableStream is effectively what we do, but it goes against the principle of the close() method that it should close the socket no matter what.

Enqueuing unread messages when close() is called, disregarding backpressure, seems generally like a good solution, but the edge case where the server just keeps sending more data forever worries me.
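
If we did go that way, that edge case could perhaps be bounded by a deadline, along these lines (purely illustrative; the hook names are hypothetical and not part of any spec):

```ts
// Purely illustrative: keep enqueuing "for the user" only until a deadline, so
// a server that streams data forever (without ever sending its Close frame)
// cannot keep the closing handshake pending indefinitely. onDataFrame is a
// hypothetical hook into the transport.
function enqueueUntilDeadline(
  onDataFrame: (handler: (msg: string | Uint8Array) => void) => void,
  controller: ReadableStreamDefaultController<string | Uint8Array>,
  deadlineMs = 2000,
): void {
  const deadline = Date.now() + deadlineMs;
  onDataFrame((msg) => {
    if (Date.now() < deadline) {
      controller.enqueue(msg); // still within the closing window: surface it
    }
    // Past the deadline the frame is dropped; the usual close-timeout path
    // then tears the connection down.
  });
}
```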

For point 3., are you referring to the

   server MUST close the underlying TCP
   connection immediately; the client SHOULD wait for the server to
   close the connection but MAY close the connection at any time after
   sending and receiving a Close message, e.g., if it has not received a
   TCP Close from the server in a reasonable time period.

section from RFC6455? Chrome uses a timeout of 2 seconds before closing the connection itself (many servers seem to get this wrong). If we make it configurable I worry that everyone will just set it to zero, defeating the purpose of avoiding the client getting stuck in TIME_WAIT state.
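
Roughly, the behaviour I'm describing is something like this sketch. The helper names are hypothetical, and the 2-second constant is just Chrome's current choice, not anything from the spec.

```ts
// Hypothetical sketch: after the Close handshake, give the server a bounded
// window to close the TCP connection first (so the client usually avoids
// ending up in TIME_WAIT); if it never does, close it ourselves.
// waitForTcpClose/abort are made-up helper names.
const CLOSE_TIMEOUT_MS = 2000; // Chrome's current value, per this thread

async function awaitServerTcpClose(socket: {
  waitForTcpClose(): Promise<void>;
  abort(): void;
}): Promise<void> {
  const timedOut = new Promise<"timeout">((resolve) =>
    setTimeout(() => resolve("timeout"), CLOSE_TIMEOUT_MS)
  );
  const winner = await Promise.race([
    socket.waitForTcpClose().then(() => "closed" as const),
    timedOut,
  ]);
  if (winner === "timeout") {
    // RFC 6455 lets the client close the connection itself if it has not
    // received a TCP close from the server in a reasonable time period.
    socket.abort();
  }
}
```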

etodanik (Author) commented Aug 30, 2022

Yes, from what I can see, enqueuing unread messages is the best compromise. This is exactly what I did in the PR I contributed to Deno to fix this behaviour, and it fixed my use case nicely.

I see the worry about everyone setting it to zero. Two seconds seems like a reasonable default, and 5 seconds is the maximum time recommended for most timeouts in the RFC.

Can we get away with something like Math.max(2000, options.closeTimeout) for the option? Basically meaning that it can only be customised upwards and never downwards (in case someone wants to properly close things with a server that sends tons of data before the Close frame and may take a while)?

This would solve the worry about the server just sending unlimited data, while generally working well with RFC-compliant servers.
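
Concretely, the clamping I have in mind would be something like this (hypothetical names; closeTimeout is not an existing WebSocketStreamOptions member):

```ts
// Hypothetical: closeTimeout is not an existing WebSocketStreamOptions member.
// Clamp it so callers can only raise the timeout above the 2 s default,
// never lower it below that.
const DEFAULT_CLOSE_TIMEOUT_MS = 2000;

function effectiveCloseTimeout(options?: { closeTimeout?: number }): number {
  return Math.max(
    DEFAULT_CLOSE_TIMEOUT_MS,
    options?.closeTimeout ?? DEFAULT_CLOSE_TIMEOUT_MS,
  );
}
```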

etodanik (Author) commented Aug 9, 2023

@ricea Just pinging here: we're using this in production with Deno's version of WebSocketStream and are highly interested in seeing this resolved.

ricea (Owner) commented Aug 10, 2023

Sorry. I haven't had time to work on this. If you happen to have time to draft a PR to specify the behaviour, that would be a big help.
