Load balancers
When you deploy your app to production, at some point you'll want to scale out. Scaling out means running the app on multiple servers. When the app runs in the cloud, scaling out is a matter of setting the number of servers you want to run. A mechanism called a load balancer then picks a server for each incoming request. The load balancer can pick the servers in sequence (round robin) or use some other logic to pick one.
WebSockets
When using WebSockets there is no problem: once the WebSocket is established it acts like a tunnel between one server and the browser. But when using polling or long polling there can be a problem. Each message is a separate request, and each request can end up at a different server, a server that might know nothing about the messages sent to the client earlier or about the context of the message.
Let's say your app is scaled out to multiple servers. Server 1 gets the request to prepare order 1 and starts processing it. When a polling request comes in, the load balancer assigns it to a different server, and that server doesn't know about order 1.
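To make the problem concrete, here is a contrived sketch (the OrderHub class, its methods, and the in-memory dictionary are my own illustration, not code from the app) of server-local state that breaks once requests start landing on different instances:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Hypothetical hub that keeps order state in memory on the server it runs on.
// On a single server this works; behind a load balancer without sticky
// sessions, a later (long) polling request may reach an instance whose
// dictionary has never heard of the order.
public class OrderHub : Hub
{
    private static readonly ConcurrentDictionary<int, string> OrderStatus =
        new ConcurrentDictionary<int, string>();

    public async Task PrepareOrder(int orderId)
    {
        OrderStatus[orderId] = "Preparing"; // stored only on *this* server
        await Clients.Caller.SendAsync("OrderStarted", orderId);
    }

    public Task<string> GetOrderStatus(int orderId)
    {
        // A request routed to another instance ends up here with no state.
        return Task.FromResult(
            OrderStatus.TryGetValue(orderId, out var status) ? status : "Unknown");
    }
}
```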
With Server-Sent Events the same problem can occur, because the HTTP connection could get dropped. The EventSource in the browser will then immediately restore the connection, but possibly to a different server.
Sticky Sessions
We can solve this problem by using sticky sessions. There are several implementations, but most of the time it works as follows: as part of the response to the first request, the load balancer sets a cookie in the browser indicating which server was used. On subsequent requests the load balancer reads the cookie and assigns the request to that same server.
The IIS and Azure Web Apps version of sticky sessions is called Application Request Routing Affinity, or ARR Affinity. Since SignalR could fall back to non-WebSocket transports, you should turn this on for every server your application runs on. When using an on-premises server with IIS, install the Application Request Routing (ARR) module.
And while you're at it, make sure WebSockets is also turned on. Otherwise your application will use Server-Sent Events at best.
Syncing Clients Between Instances
But there's another problem. Let's say a user is working on a web document in Office 365 and she invites others to join her. The others might end up on another server. Now when the user on server 1 changes the document, a message has to be sent to the others, but server 1 doesn't know about users that are connected to hubs on other servers.
To solve this, the servers need a way to share data. This can be done with a database, but a faster alternative is to use a Redis cache. SignalR supports Redis out of the box, as we will see in a moment. It in fact uses Redis' built-in pub/sub functionality to synchronize client information across the different servers.
Solving this is so easy it doesn't even need a screenshot. Install the NuGet package Microsoft.AspNetCore.SignalR.Redis. Then, in the ConfigureServices method of the Startup class, chain AddRedis behind AddSignalR, passing the Redis connection string as a parameter. That's it.
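As a minimal sketch (assuming the Microsoft.AspNetCore.SignalR.Redis package mentioned above; the connection string is a placeholder), the ConfigureServices method could look like this:

```csharp
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Chain AddRedis behind AddSignalR and pass the Redis connection string.
        // "localhost:6379" is a placeholder; use your own Redis server or the
        // connection string of an Azure Redis Cache instance.
        services.AddSignalR()
                .AddRedis("localhost:6379");
    }
}
```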
You can install Redis yourself or use the Azure Redis service.
Apart from Redis there is also community-built support for other data stores, but the beauty of Redis is that it doesn't even have to store the data; it just passes the messages through its pub/sub channels.
Azure SignalR Service
Sticky sessions, a Redis cache, and we didn't even talk about connection limits yet. Each client (the browser) has an HTTP connection limit of about 6 simultaneous connections per server. When SignalR uses Long Polling or Server-Sent Events that limit is quickly reached.
Even with WebSockets there is a limit of about 50 connections.
If you don't like managing all of this stuff there's a turn-key solution: The Azure SignalR Service. It works like this:
All client connections are offloaded to the Azure SignalR Service, so they're no longer connected directly to the server your app runs on.
Your application maintains a tunnel connection to the SignalR service, through which it is notified when a new client connects and through which it sends messages to all clients, to groups, or to individual clients. All the code of your application, including the hub, still runs on your own server.
You pay for a bundle of simultaneously supported connections, and the worrying about sticky sessions, Redis caches and connection limits goes away.
Incorporating the Azure SignalR Service is about as easy as adding the Redis cache. Since it is in preview at the time of writing, please see here for instructions.
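As an indication of what the wiring looks like (a sketch based on the preview-era Microsoft.Azure.SignalR package; the hub name and route are placeholders, so do check the current instructions):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // AddAzureSignalR reads the Azure SignalR Service connection string from
        // configuration (Azure:SignalR:ConnectionString) unless one is passed in.
        services.AddSignalR()
                .AddAzureSignalR();
    }

    public void Configure(IApplicationBuilder app)
    {
        // In the preview bits, hubs are mapped through UseAzureSignalR.
        // OrderHub and "/orderhub" stand in for your own hub and route.
        app.UseAzureSignalR(routes => routes.MapHub<OrderHub>("/orderhub"));
    }
}
```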
If you want all the ins and outs of ASP.NET Core SignalR take a look at my Pluralsight course "Getting Started With ASP.NET Core SignalR".
Happy SignalRing!