re: Long polling was a hack because Websockets were not ready, I do not see a reason why would you not keep a connection open for all time and implemen...

When I did the logic for keeping the connection open, I also had to have a queue in case of activity spikes. Then I had to monitor whether that queue got too full. Then I had to take action on the connection when it did. Then I also had to handle connection-failure notices. That made the connection a shared, stateful resource between two competing concerns, which opens up another can of concurrency worms. E.g., the connection closed, but it was because the queue monitor closed it, so don't reconnect.
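A minimal sketch of that shared-state problem (all names here are illustrative, not the actual code): the reader fills a queue, a monitor closes the connection when the queue gets too full, and the disconnect handler then has to know *why* the socket closed before deciding whether to reconnect.

```python
import queue

class Connection:
    """Illustrative stand-in for a long-lived socket."""
    def __init__(self):
        self.open = True
        self.closed_by_monitor = False  # shared, mutable state between the two concerns

    def close(self, by_monitor=False):
        self.open = False
        self.closed_by_monitor = by_monitor

def monitor(conn, q, high_water=100):
    # Protective disconnect: shed load when the consumer falls behind.
    if q.qsize() >= high_water:
        conn.close(by_monitor=True)

def on_disconnect(conn):
    # The reconnect logic must inspect the flag to avoid fighting the monitor.
    return "wait" if conn.closed_by_monitor else "reconnect"

# Example: the monitor trips, and the disconnect handler knows not to reconnect.
conn, q = Connection(), queue.Queue()
for i in range(100):
    q.put(i)
monitor(conn, q)
```

The `closed_by_monitor` flag is exactly the kind of extra coordination the open-connection design forces on you; a real version would also need locking around it.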

Whereas if I make a request/reply of a specific duration, most of the connection-handling code goes away. I don't have to disconnect for being overwhelmed, because the whole thing shuts itself down after a set duration anyway. And I can just wait until I'm done processing the spike before making the next request. Connection failures in this case tend to throw exceptions instead of being handled as events. So the remediation for that is easy: catch, backoff, and retry.
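That whole loop fits in a few lines. Here is a hedged sketch, where `fetch_batch` stands in for a long-poll HTTP call that blocks server-side for up to the given timeout and returns whatever events arrived (the names and the `None` stop condition are assumptions for illustration):

```python
import random
import time

def poll_loop(fetch_batch, handle, max_attempts=5, base_delay=0.5):
    """Long-poll loop: request, process everything, then request again."""
    attempt = 0
    while True:
        try:
            events = fetch_batch(timeout=30)  # server holds the request open up to 30s
        except ConnectionError:
            attempt += 1
            if attempt > max_attempts:
                raise  # persistent failure: give up and surface it
            # Catch, back off (exponentially, with jitter), and retry.
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))
            continue
        attempt = 0
        if events is None:  # illustrative stop condition so the sketch terminates
            return
        for event in events:
            handle(event)  # finish processing the spike before the next request
```

Note there is no disconnect handling at all: a failure is just an exception on this call stack, and backpressure falls out of simply not issuing the next request yet.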

I should also note that this was for processing event streams. So a) it is totally possible to receive more messages than the code can process and b) I can request the server to send me messages from where I left off. So nothing is lost from being disconnected for a time.

Certainly websockets are better for a lot of common scenarios. Long polling started as a hack because request/reply streams were not meant for server pushes. But the hack actually turns out to have discovered a nice model for pull-based streaming of live data. Now we just need a more efficient implementation.

OK, that might sound like a strange thing to ask, but pulling can be done using any of the methods I mentioned, including websockets.

Can be done, but not simply. For example, consider sending a pull-based, time-boxed request for data (similar to a long poll) across a web socket. Since a web socket is bidirectional and asynchronous, the response is in no way connected to the request: it could arrive at any time (or never), and on another thread. So you have to set up a state machine to track the state of each request, plus timeouts to give up on a response. And there is potential for a lot of byzantine problems like out-of-order responses due to GC pauses, lost packets plus network buffers, and whatnot. Not to mention dealing separately with connection-interruption events. A lot of these problems are already handled for you by a regular HTTP request. A good library on top of websockets could abstract this away, like HTTP libs do for TCP. Let me know if you know of any good ones.
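To make the bookkeeping concrete, here is a hedged sketch of what that state machine looks like: every request gets a correlation id and a pending-request table entry, because the socket itself gives you no request/response pairing. All names are hypothetical; `send_frame` stands in for whatever actually writes to the socket.

```python
import itertools
import threading

class RequestTracker:
    """Correlate async responses with requests over a bidirectional channel."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._pending = {}  # id -> (Event, slot for the reply)
        self._lock = threading.Lock()

    def send(self, send_frame, payload, timeout=5.0):
        req_id = next(self._ids)
        done, slot = threading.Event(), {}
        with self._lock:
            self._pending[req_id] = (done, slot)
        send_frame({"id": req_id, "payload": payload})
        if not done.wait(timeout):  # give up on a response that never comes
            with self._lock:
                self._pending.pop(req_id, None)
            raise TimeoutError(f"request {req_id} timed out")
        return slot["reply"]

    def on_frame(self, frame):
        # Responses can arrive in any order, on another thread, or never.
        with self._lock:
            entry = self._pending.pop(frame["id"], None)
        if entry:
            done, slot = entry
            slot["reply"] = frame["payload"]
            done.set()
```

And this still doesn't cover connection-interruption events, which would need a separate path that fails every pending entry. With plain HTTP, all of this is the library's problem, not yours.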

Oh, I think I see what you are saying. Create a web socket connection and automatically close it after a certain period of time, then reconnect and go again when the client is done processing. It would be an interesting scenario to test. Standard HTTP requests by now have mitigations for short-lived connections, but I'm not sure if they apply to web sockets. Need to test to be sure. Thanks for bearing with me. :)

I think you have in mind a specific usage that is not so popular and has rather complex requirements.

You mentioned threads, requests, out-of-order responses, and streams; I can't think of a scenario involving all of these.

Usually you only need to refresh some non-real-time data by making requests, and you do not send a new request until you have received a response to the last one.

Or you have a stream of data that keeps coming; usually you do not process it on the front end, so you do not end up with a lag, and what's more, you do not have distinct requests. Also, JS is single-threaded. There is no need for a request: you keep a connection open because you need the latest data; otherwise it would not require a websocket.

Yeah, the experience where I wished I had a long-poll capability was on the back end, processing events. Maintaining an open connection requires a surprising amount of code to handle failure cases. Whereas a long poll can give you realtime-stream-like behavior in the normal case, and a nice protective fallback in spiky cases, without worrying too much about the underlying connection. I think it is probably abnormal for web-socket users to run into throughput-constrained situations where they need to protectively disconnect. It's not our normal usage pattern either, but it was a possibility I had to code for.
