TL;DR
I've had an incident with SSE that caused real client pain; I detail it below.
Server Sent Events are a long established and rec...
Does the problem occur if the connection is using HTTPS? Most old proxies aren't able to do HTTPS decryption, and so cannot read the headers.
Also, instead of a custom long polling system, you could continue to use SSE. It's perfectly valid for the SSE server to close the connection after every push (as in long polling). With this workaround you still benefit from the EventSource class. You can even go one step further and detect whether the connection is using HTTP/1 or HTTP/2: if it is using HTTP/1, you can close the connection after every event for compatibility with old proxies, and keep a persistent connection with HTTP/2 (because, AFAIK, all modern proxies that support HTTP/2 support SSE too).

Thanks Mike for this great post, and thanks Kévin for this very useful hint. Closing the connection helped me a lot in getting my application to work properly. The data is now received immediately, even when some old proxies are in between. The SSE connection is re-established automatically after the reconnection delay.
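For readers who want to try this, here's a minimal sketch of the hint above, assuming a plain Node.js HTTP server; the endpoint, port, and payload are illustrative, not from the article:

```ts
import { createServer, IncomingMessage, ServerResponse } from "node:http";

// Sketch of Kévin's hint: push one event, then close the connection when the
// client is on HTTP/1 so that old proxies flush the buffered response.
// EventSource reconnects automatically after its retry delay, so the client
// code is unchanged either way.
const server = createServer((req: IncomingMessage, res: ServerResponse) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
  });

  // Illustrative payload; a real app would push its own events here.
  res.write(`data: ${JSON.stringify({ time: Date.now() })}\n\n`);

  if (req.httpVersion.startsWith("1")) {
    res.end(); // HTTP/1: behave like long polling for old-proxy compatibility.
  }
  // HTTP/2: keep the stream open and continue writing events as they occur.
});

server.listen(8080);
```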
SSE is production ready!
scrumpoker.works/
Hello Kévin, having just found this article and comments a year after they were published, I am curious if you have experienced any issues with your proposed workarounds. In your opinion, is SSE production ready? I am at the point where I am ready to load test my app and thus I hope the answer is yes. ;-)
Hi @kenricashe. Yes, in my opinion SSE is totally suitable for production. I've used it in production for years on my own projects, and I also manage a SaaS product built on top of SSE and the Mercure protocol, which has many customers and serves a huge number of SSE connections every day without problems (mercure.rocks).
It's a very good point. We serve HTTP/2 (via AWS CloudFront). What the client browser is actually getting, I don't know yet. I'll report back if I find any more information there.
Did you get any insights on this? For my application SSE seems a very good fit, but your story put me on guard.
If forcing an HTTPS connection (which by now should be widely acceptable, if not flat-out the default for normal everyday applications) mitigates this issue, then that is valuable knowledge to add to the equation here.
Sjoerd82, in your experience, assuming you force HTTPS, has SSE been stable for you?
I would also like to know whether HTTPS solves the issue. Thank you.
Great write up! Does anyone use Server-Sent Events in their projects? If yes, for which use cases? This video dives into the main building blocks of Server-Sent Events in Go.
youtu.be/nvijc5J-JAQ
Interesting discovery! As soon as I reached the "20 minutes" paragraph I assumed it was a proxy or router along the way that killed the connection due to inactivity. I didn't expect the lack of a Content-Length header to be the culprit. Love the comparison table.
I disagree with the premise of the title, though. I get the impression that the application has been built to use client requests as a means to trigger SSE responses. SSE is great for pushing auxiliary real-time information, but the client should not receive responses to its own actions through the SSE channel; for logging in, for example, it should still receive a response as part of its regular RESTful request.
I think Server-Sent Events are production ready, but I think their usage should be limited to messages like "oh hey you have 2 new notifications" that leaves the client to decide whether to fetch the notifications instead of "Oh hey your Aunt May called and asked how the [...]". Much like getting an SMS letting you know that you've got a voice mail, but leaving you to decide whether to listen to it.
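As a rough illustration of that notify-then-fetch pattern on the client (the event name and endpoints are made up, not from the article):

```ts
// The SSE channel only carries a lightweight hint; the client decides
// whether to fetch the actual data over a regular HTTP request.
const source = new EventSource("/events");

source.addEventListener("notifications", async () => {
  // Hypothetical REST endpoint; the payload never travels over SSE itself.
  const res = await fetch("/api/notifications");
  const items: unknown[] = await res.json();
  console.log(`You have ${items.length} new notifications`);
});
```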
Hmmm. Well, they wouldn't work well for notifications in your last point, given the limitations. SSE is frequently described as a valid choice for something like a chat app, which it clearly isn't. I can find no documentation other than the spec that indicates otherwise. Many applications absolutely require back-end-initiated communication: anything which is effectively a relay. If that is not working, the entire principle of server sent events is broken, to my mind.
I guess I'd like to see, splattered all over articles about it: "this won't work through some routers or networks." Hence I wrote this. It's not production ready for a whole series of use cases that are documented by others.
I should have expanded on my notification example, sorry about that. What I meant was that the client has the ability to send a request to /voicemails and receive a response as part of that request, making the SSE a helpful nicety. Progressive enhancement, if you will :)

A chat app implies two-way communication though, so I think conceptually SSE is ill-suited for that purpose. If you choose to go that route, with what we've now learned in mind, I'd implement some sort of handshake mechanism to test the connection and fall back on polling if the handshake does not complete in time.
As for corporate networks, I think it's safe to assume there will always be quirky setups that prohibit beyond-basic usage. My favorite pet peeve example of corporate stupidity are password policies that restrict lengths or confine you to latin alphanumerical characters.
Yes we see that all the time too on passwords.
That and the fact that 24% of my users are on IE11. Nice. It's the one thing as a developer of enterprise apps that always concerns me: caniuse.com bases its usage stats on its own visitors, and clearly not many devs are running IE11 on their development machine while browsing docs, so the stats always seem very low.
I know where you are coming from with your point on progressive enhancement and the use of SSE. My point is that the documentation says "it does this", and many articles about it say "it does this". And then, way down in the bowels of the thing, there is a paragraph that says:
"Where possible" is the killer :)
And then somewhere else entirely you can find a reference to the fact that you can't actually disable chunking on a network you don't own.
Excellent write up. I shall reap your hard earned experience.
Yeah, sharing this type of difficult journey with the community has a positive impact (as evidenced by the comments). I've spent some time with websockets, and the summary of their warts is spot-on.
I'll absolutely think twice before considering SSE for anything other than lab work!
Thanks!
OMG Mike, what an excellent discovery! If the design requires sockets then it's most likely for speed (direct peer-to-peer), right? I'm wondering what something like RabbitMQ would have done in this situation.
Thank you for this Long Polling tip.
Hey John, we are using Bull and moving to Rabbit on the back end, and indeed that's what gives us the ability to easily rewind events on a reconnect. In this case I think it's either sockets (but I just hate the amount of code we have to write around socket.io) or long polling, which is basically working now; we had to do the reconnect logic ourselves, but that is easier given just how simple long polling is compared to sockets. Performance seems to be holding up, but this is a live situation, haha! We haven't done full scalability testing on it yet...
I had considered ZMQ / scalable socket services for p2p telephony signaling, but its bidirectional eventing is not all that strongly developed; mostly ZMQ is still about unidirectional messaging.
Huge thanks for this article. Just found it while researching SSE. Very helpful as I also support clients with industrial networks.
Curious if you could still use SSE with clients/networks that support it. Have the server send a canary message right away upon connecting. If the client does not receive the canary message within a few seconds of connecting, then the client knows SSE is not safe to use and switches to long polling.
Yes, we still do that. We have a fallback to long polling with a pretty simple layer over it. If we don't get messages from SSE in response to an initial "ping", we have a layer that basically flushes the stream every time (with a small debounce delay) and reopens it. We send a command to the server that says "treat this stream as always close-and-reopen". Closing the stream does cause the proxies to forward on all of the data.
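A possible client-side sketch of the canary-and-fallback approach described in this exchange; the event name, endpoints, and timeout are assumptions, not the author's actual code:

```ts
const CANARY_TIMEOUT_MS = 3000; // "a few seconds", tune to taste

function connect(): void {
  const source = new EventSource("/events");
  let canaryReceived = false;

  // The server is assumed to emit a "canary" event immediately on connect.
  source.addEventListener("canary", () => {
    canaryReceived = true; // SSE got through the proxies; keep using it.
  });

  setTimeout(() => {
    if (!canaryReceived) {
      // Something on the path is buffering the stream: abandon SSE
      // and fall back to long polling instead.
      source.close();
      longPoll();
    }
  }, CANARY_TIMEOUT_MS);
}

async function longPoll(): Promise<void> {
  // Placeholder loop: each request blocks server-side until events are
  // available (or a timeout elapses), then we immediately re-request.
  for (;;) {
    const res = await fetch("/poll");
    const events: unknown[] = await res.json();
    events.forEach((e) => console.log("event:", e));
  }
}

connect();
```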
I ran into this issue a lot when I was doing secure messaging applications last decade, and we looked at all of these options then, too. In the end we went with a dedicated websocket on a side channel reserved just for client event notifications (such as a new message waiting) and used long polling to actually collect messages, and as an alternative for when the websocket died. This problem still needs a better solution, though some think it will magically appear with HTTP/3.
What are the specifics of the "old" proxy? Were the headers commonly used for related scenarios used in the problematic scenario? Related reading: stackoverflow.com/questions/136727...
stackoverflow.com/questions/610290...
Great post. You mentioned running scalability tests. Could you give an example of what those look like?
So it basically means testing a "landing server" with an increasing number of connections until it breaks: spinning up "dummy" clients that perform basic operations (roughly as sketched below). This tests the endpoint's robustness. We'd say X concurrent users per landing server is the minimum to pass a test, and we look to see whether code changes have improved on it.
Our architecture has landing servers which authenticate users and forward requests to queues. Queued jobs are picked up by nodes that can do single things or lots of things: a kind of heterogeneous grid. Landing servers need to listen for and relay events to the user on job completion.
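As a rough sketch of the kind of dummy-client test described above (host, port, path, and thresholds are invented):

```ts
import { request } from "node:http";

// Open a fixed number of dummy SSE clients against one landing server and
// periodically report how many are still alive, to find the breaking point.
const TARGET = 5000; // raise across runs until the server degrades
let alive = 0;

function openClient(id: number): void {
  const req = request(
    {
      host: "localhost", // hypothetical landing server
      port: 8080,
      path: "/events",
      headers: { Accept: "text/event-stream" },
    },
    (res) => {
      alive++;
      // A fuller test would also perform basic operations and check that
      // relayed job-completion events arrive with acceptable latency.
      res.on("data", () => {});
      res.on("end", () => alive--);
    }
  );
  req.on("error", () => console.error(`client ${id} failed`));
  req.end();
}

for (let i = 0; i < TARGET; i++) openClient(i);
setInterval(() => console.log(`${alive}/${TARGET} connections alive`), 5000);
```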