Saving SignalR connection state with scaleout servers - c#

Using SignalR 2.2.0, I would like some validation of my observations about clients executing hub methods in a multi-server scale-out setup.
I have SignalR running in a multi-server model, using the SQL Server scaleout message bus.
When a client connects, the OnConnected method of the Hub is called, as expected. I save the Context.ConnectionId in a static dictionary.
When the client later calls a method of the hub, it seems that another server in the farm executes the hub method, not the server that originally ran OnConnected. The Context.ConnectionId value in the hub method is correct, but it does not exist in the dictionary.
Is this the expected behaviour in the scale-out model? If so, then I assume that I should be saving connection state data in a database so that hubs on all the servers will be able to look up the connection state based on the ConnectionId.

Is this the expected behaviour in the scale-out model? If so, then I assume that I should be saving connection state data in a database so that hubs on all the servers will be able to look up the connection state based on the ConnectionId.
Yes, this is the expected behaviour, and you should use a shared resource such as a database or cache. Connection ids alone will not be enough, though, because a client gets a different connectionId on each server and on every refresh. So you should map connectionIds to clients: when a disconnect occurs, find the client for that connectionId and check whether that client still has another connectionId.
I have answered a similar question here in more detail.
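A minimal sketch of that mapping, assuming a hypothetical IConnectionStore / ConnectionStoreFactory backed by your shared database or cache (the store itself is not shown), could look like this:

    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class ChatHub : Hub
    {
        // Hypothetical abstraction over a shared table/cache that maps a logical
        // client (here the authenticated user name) to its current connection ids.
        private static readonly IConnectionStore Store = ConnectionStoreFactory.Create();

        public override Task OnConnected()
        {
            // Every server in the farm writes to the same shared store, so any
            // server that later executes a hub method can resolve the client.
            // Assumes an authenticated user; otherwise use whatever identifies the client.
            Store.Add(Context.User.Identity.Name, Context.ConnectionId);
            return base.OnConnected();
        }

        public override Task OnDisconnected(bool stopCalled)
        {
            // Remove only this connection id; the same user may still be connected
            // through another server, tab or device.
            Store.Remove(Context.User.Identity.Name, Context.ConnectionId);
            return base.OnDisconnected(stopCalled);
        }

        public void SendTo(string targetUser, string message)
        {
            // Look the target up in the shared store instead of a static dictionary.
            foreach (var connectionId in Store.GetConnections(targetUser))
            {
                Clients.Client(connectionId).receiveMessage(message);
            }
        }
    }

The key point is that nothing connection-related lives in a static field on one server; every hub instance, on whichever server it runs, reads and writes the same store.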

Related

WCF : How to handle a connection loss with the client?

I am currently developing an IoT device that performs HTTP requests to my WCF web server via the GPRS service (GSM network).
The thing is: I very often (due to the bad GPRS RSSI on my connected object) lose the connection to the GPRS service. The problem is that sometimes, when I perform an HTTP GET on my server, it generates a timeout on the object's side (HTTP code 408) while the server actually received the request: it means that I had a connection when I queried the server, but lost it right after.
However, the server isn't aware that my object lost the connection, so it will do what it was told to do anyway (delete stuff in the database, etc.).
I need very precise synchronization between the object and the server, and I don't want the server to perform database changes if my object loses the connection and doesn't receive the server's response. This is why I would like to know if it is possible, with WCF, to know whether the server's response was successfully received at the end of the API call or not (some sort of ACK to be sure that the HTTP exchange fully worked on both sides).
Thanks for your help.
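One pattern worth sketching here (this is not a built-in WCF feature; the contract, the operation names and the PendingChangeStore/Database helpers are all hypothetical) is to split a destructive operation into a staging call and an explicit confirmation call, so the server only commits once the device has acknowledged the response:

    using System;
    using System.ServiceModel;

    // Sketch of a two-step (stage + confirm) pattern; all names are hypothetical.
    [ServiceContract]
    public interface IDeviceService
    {
        [OperationContract]
        Guid StageDelete(int recordId);   // step 1: record the intent, delete nothing yet

        [OperationContract]
        void ConfirmDelete(Guid ticket);  // step 2: the device proves it received the response
    }

    public class DeviceService : IDeviceService
    {
        public Guid StageDelete(int recordId)
        {
            var ticket = Guid.NewGuid();
            PendingChangeStore.Save(ticket, recordId);  // persist the pending change only
            return ticket;                              // if the device never sees this, nothing was committed
        }

        public void ConfirmDelete(Guid ticket)
        {
            int recordId = PendingChangeStore.Take(ticket);
            Database.Delete(recordId);                  // commit only after the explicit acknowledgement
        }
    }

If the confirmation never arrives because the GPRS link dropped again, the staged change can simply expire and nothing is deleted.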

Dispose one existing data server connection for all SignalR hubs

I have a data server pushing data through my .NET server to clients using SignalR. Because a SignalR Hub instance is created per request, but I only want one connection/subscription to the data server, I'm using a singleton controller to manage and map incoming data to the corresponding client.
My problem now is: when and how can I dispose this one connection on application termination (even forced termination)?
I'm quite new to handling connections in .NET, so this might be really easy. Maybe there's a better application design than using the singleton controller.
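If the server runs under ASP.NET, one option (a sketch; DataServerConnection stands in for the singleton controller) is to register the singleton with the hosting environment so it gets a Stop callback on shutdown and can dispose the connection there:

    using System;
    using System.Web.Hosting;

    // Hypothetical singleton that owns the single connection/subscription to the
    // data server and asks ASP.NET to call it back on shutdown.
    public sealed class DataServerConnection : IRegisteredObject, IDisposable
    {
        private static readonly Lazy<DataServerConnection> LazyInstance =
            new Lazy<DataServerConnection>(() => new DataServerConnection());

        public static DataServerConnection Instance
        {
            get { return LazyInstance.Value; }
        }

        private DataServerConnection()
        {
            // Register with the hosting environment so Stop() runs when the
            // application shuts down or is recycled.
            HostingEnvironment.RegisterObject(this);
            // ... open the connection / subscription to the data server here ...
        }

        public void Stop(bool immediate)
        {
            Dispose();
            HostingEnvironment.UnregisterObject(this);
        }

        public void Dispose()
        {
            // ... close the data-server connection / unsubscribe here ...
        }
    }

A hard process kill cannot be intercepted from inside the process, so the data server should still cope with a connection that simply disappears.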

SignalR scaleout with Azure for chat scenario

For a chat application, I use an Azure architecture with SignalR, with the web role acting as the SignalR server (the messages are not broadcast-type but are intended for a specific user/client).
I want to scale out the SignalR server along with the web roles to handle heavy user load. However, the SignalR documentation doesn't recommend using the pre-baked SignalR scale-out methods based on a backplane (Redis, Service Bus) for cases where the number of messages increases as more users connect (or in user-event-driven scenarios). It explicitly states: "Client-to-client (e.g., chat): In this scenario, the backplane might be a bottleneck if the number of messages scales with the number of clients; that is, if the rate of messages grows proportionally as more clients join."
Question:
Does anyone know of a custom scale-out solution for such a high-frequency case that doesn't push every message to each server instance, or some other scale-out approach?
I have already looked everywhere in the SignalR documentation and the related videos but couldn't find anything, other than the term "filtered bus", which is never explained (what it is or how it should be used).
I figured it out myself: the basic idea is server affinity/sticky sessions.
Each web-role instance acts as a stand-alone SignalR server. On the first connection, I let the Azure load balancer choose any web-role instance and save that instance's IP address, together with the client identifier, in a map. If another connect request comes from the same client (e.g. after a page refresh), I check the IP address of the current role instance: if it matches the entry in the map, I let it proceed; otherwise I disconnect the client and connect it to the correct web-role instance.
Each worker-role instance also acts as a SignalR .NET client and connects to all the available SignalR servers (all web-role instances). Before sending a message to a SignalR server (web role), I look up the map to determine the correct SignalR server instance for the intended JS recipient.
Benefits:
There is no need for a backplane (and hence no extra delay in message delivery).
Each web-role instance only cares about the clients connected to it, and messages don't have to be duplicated on every SignalR server. Hence it scales pretty well.
Easy to implement.
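A rough sketch of that affinity check (AffinityMap, RoleInstanceInfo and the reconnectToOwner client method are hypothetical names; the map would live in a shared store):

    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class ChatHub : Hub
    {
        public override Task OnConnected()
        {
            var clientId = Context.User.Identity.Name;           // or any stable client identifier
            var thisInstance = RoleInstanceInfo.CurrentAddress;   // hypothetical: this web-role instance's IP/id

            // Hypothetical map in a shared store: client identifier -> owning instance.
            var owner = AffinityMap.GetOrAdd(clientId, thisInstance);

            if (owner != thisInstance)
            {
                // The load balancer picked the "wrong" instance: tell this client
                // to drop the connection and reconnect until it reaches its owner.
                Clients.Caller.reconnectToOwner(owner);
            }

            return base.OnConnected();
        }
    }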

Network tcp socket application retry method

I'm writing a Windows-based client (C++) and server (C#) application that communicate with each other via TCP. The client sends data and the server needs to acknowledge it.
For this purpose I make a single 'socket()' and 'connect()' call at client startup, for the client's whole lifetime. Some error checking and retries are kept inside the 'send()' and 'recv()' calling methods. Note that a client sends one set of data (multiple packets) at a time and then quits.
Now my questions are:
If the server is running continuously (e.g. as a Windows service) on some PC, do I really need to handle connection breakdown (network failure) by creating a new socket and connecting again from the client?
If so, do I need to resend the data from the start, or from the point where the client last failed to communicate?
I want to know the general methods what people are using around the world for dealing this kind of situations for network applications.
do I really need to handle connection breakdown by creating a new socket and connecting again from the client?
Depends on how precious your data is. If you want to make sure it ended up at the server, and an error occurred while sending, then you can consider it "not sent".
If so, do I need to resend the data from the start, or from the point where the client last failed to communicate?
That depends entirely on how your application logic and application protocol work. From your description we can't know how you send your data and how a server would recognize data it has already seen.
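For example, one way to make resending from the start safe (a sketch; the data-set id and the Store/SendAck helpers are assumptions about your protocol, not something it necessarily has yet) is to tag each data set with an identifier so the server can ignore duplicates:

    using System;
    using System.Collections.Generic;

    // Server-side sketch: deduplicate resent data sets by id so the client can
    // safely resend after a connection breakdown. Store and SendAck are
    // hypothetical helpers; "dataSetId" is an assumed field in the wire format.
    public class DataSetReceiver
    {
        private readonly HashSet<Guid> _seen = new HashSet<Guid>();

        public void HandleDataSet(Guid dataSetId, byte[] payload)
        {
            lock (_seen)
            {
                if (!_seen.Add(dataSetId))
                {
                    SendAck(dataSetId);   // duplicate: acknowledge again, do not store twice
                    return;
                }
            }

            Store(payload);
            SendAck(dataSetId);
        }

        private void Store(byte[] payload) { /* persist the data set */ }
        private void SendAck(Guid dataSetId) { /* write an ack frame back to the client */ }
    }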
do I really need to handle connection breakdown (network failure) by creating a new socket and connecting again from the client?
You certainly do not need to create a new socket after a connection shutdown; you can use the existing socket to connect anew.

Persistent connection with queries from server to client

I have a company network under my control and a couple of closed customer networks. I want to communicate from a web application in my network to a database inside a customer network. My first idea was:
The web application stores a query in a database in the company network and waits for the answer.
A Windows service inside the customer network polls our database a couple of times per second through a (WCF) web service, also in our company network.
If a query is available, the Windows service executes it in its local database and stores the answer in the company database.
I've been thinking about dropping the polling idea and instead using a persistent connection between a client in the customer network and a server in our company network. The client initiates the connection and then waits for queries from the server. What would be better or worse compared to polling through a web service? Would WCF be the right thing to use here?
You have a few approaches:
WCF duplex: once the web application stores a query in the database, you initiate a call to the client (in this case the Windows service) instead of making the Windows service poll every few seconds. net.tcp would be a good choice, but you can still use HTTP.
Long polling: instead of letting your Windows service client send a request every few seconds, let it send one request and keep the channel open. Set a longer timeout on both the client and the WCF service, and let the server method loop and check the database for new notifications. Once new notifications are found, the method returns with them. On the client side, once you get a response, send another request to the server and then process the data. If a timeout error occurs, just send another request. Google "long polling" and you will find a lot.
Regarding querying the database every few seconds: a better approach is a dedicated notifications table. Instead of querying a large table with a complex SQL statement every few seconds, you can let the client add notifications to a separate table (after it is done adding them to the main table), so your query is much simpler and takes fewer resources. You can add direct pointers (like ids) in the notifications table to save time, and clean up the notifications table later.
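A rough sketch of the long-polling variant on the server side (NotificationStore and PendingQuery are hypothetical names over the notifications table described above):

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;
    using System.Threading.Tasks;

    [ServiceContract]
    public interface IQueryChannel
    {
        // The Windows service calls this and the call simply stays open
        // until there is something to return (or the long timeout elapses).
        [OperationContract]
        Task<List<PendingQuery>> WaitForQueries(string customerId);
    }

    public class QueryChannel : IQueryChannel
    {
        public async Task<List<PendingQuery>> WaitForQueries(string customerId)
        {
            var deadline = DateTime.UtcNow.AddMinutes(5);       // keep below the configured WCF timeouts

            while (DateTime.UtcNow < deadline)
            {
                // NotificationStore is a hypothetical data-access class over the
                // small notifications table, so this check stays cheap.
                var pending = NotificationStore.TakePending(customerId);
                if (pending.Count > 0)
                    return pending;

                await Task.Delay(TimeSpan.FromSeconds(1));      // poll the table, not the client
            }

            return new List<PendingQuery>();                    // nothing new: the client immediately re-issues the call
        }
    }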
