I have a data server pushing data through my .NET server to clients using SignalR. Because a SignalR Hub instance is created per request, and I want only one connection/subscription to the data server, I'm using a singleton controller to manage the connection and map incoming data to the corresponding client.
My problem now is: when and how can I dispose this one connection on application termination (even forced termination)?
I'm quite new to handling connections in .NET, so this might be really easy. Maybe there's a better application design than using the singleton controller.
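For illustration, here is a minimal sketch of the kind of singleton described above, assuming classic ASP.NET with SignalR 2.x; the DataServerController, DataHub, the Start/OnData signatures and the Application_End hook are hypothetical names, not part of the original question:

```csharp
using System;
using Microsoft.AspNet.SignalR;

// Hypothetical hub that the browser clients connect to.
public class DataHub : Hub { }

// Hypothetical singleton owning the single upstream data-server connection
// and forwarding incoming items to the matching SignalR client.
public sealed class DataServerController : IDisposable
{
    public static readonly DataServerController Instance = new DataServerController();

    private readonly IHubContext _hub =
        GlobalHost.ConnectionManager.GetHubContext<DataHub>();

    private IDisposable _upstream; // whatever handle your data-server client exposes

    private DataServerController() { }

    // Call once at startup (e.g. from Application_Start) to open the subscription.
    public void Start(Func<Action<string, object>, IDisposable> subscribe)
    {
        _upstream = subscribe(OnData);
    }

    // Invoked by the data-server connection for every incoming item.
    private void OnData(string userId, object payload)
    {
        _hub.Clients.User(userId).receiveData(payload);
    }

    // Call from Application_End; a graceful shutdown runs this, but a forced
    // kill never will, so the remote end must also tolerate an abruptly
    // dropped connection.
    public void Dispose()
    {
        _upstream?.Dispose();
    }
}
```

In Global.asax you would then call DataServerController.Instance.Dispose() from Application_End; a forced process kill bypasses this entirely, so the data server itself has to cope with connections that simply vanish.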
I am using SignalR to set up a connection between my client and my server. I would like to store some user data on init. When the user calls a method, I want access to these variables to do some calculations and send a response back to the client.
I cannot use static variables because I want these variables to be individual for each client. Saving them all in one global dictionary does not seem performant with a lot of users. Saving the data in a database is not an option, because the client will call a method approximately every 15-30 seconds, and only for a few minutes, after which the hub can be disposed.
What I am trying to achieve is one hub instance per client: one open connection with the server, 1-on-1. Is this possible with SignalR, and how, or do I have to look for another library?
Thanks a lot,
Have a great day!
Trying to get a hub instance per client connection is pretty hard, since hubs are designed to be just the opposite: a way to talk to some or all connected clients.
You are probably looking for the Persistent Connection API.
PS: I don't really see why you rule out a DB so quickly, since you could always use an in-memory cache like Redis.
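To make the suggestion concrete, here is a rough sketch of per-client state with the Persistent Connection API in SignalR 2.x; CalcConnection, UserState and the calculation itself are made-up placeholders:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class CalcConnection : PersistentConnection
{
    private class UserState { public double Baseline; }

    // State is keyed by connection id rather than kept in instance fields,
    // because a new connection/hub instance is created for every call.
    private static readonly ConcurrentDictionary<string, UserState> State =
        new ConcurrentDictionary<string, UserState>();

    protected override Task OnConnected(IRequest request, string connectionId)
    {
        // "Init": stash whatever per-user data is needed for later calls.
        State[connectionId] = new UserState { Baseline = 42.0 };
        return base.OnConnected(request, connectionId);
    }

    protected override Task OnReceived(IRequest request, string connectionId, string data)
    {
        // The client calls in every 15-30 seconds; combine its value with the stored state.
        var state = State[connectionId];
        var result = double.Parse(data) + state.Baseline;
        return Connection.Send(connectionId, result);
    }

    protected override Task OnDisconnected(IRequest request, string connectionId, bool stopCalled)
    {
        UserState removed;
        State.TryRemove(connectionId, out removed);
        return base.OnDisconnected(request, connectionId, stopCalled);
    }
}
```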
Using SignalR 2.2.0, I would like some validation of my observations concerning clients executing hub methods in a multi-server scale-out setup.
I have SignalR running in a multi server model, using the SQL Server scaleout message bus.
When a client connects, the OnConnected method of the Hub is called, as expected. I save the Context.ConnectionId in a static dictionary.
When the client later calls a method of the hub, it seems that another server in the farm is executing the hub method and not the server that originally ran the OnConnected method. The Context.ConnectionId value in the hub method is correct, but it does not exist in the dictionary.
Is this the expected behaviour in the scale-out model? If so, then I assume that I should be saving connection state data in a database, so that the hubs on all the servers will be able to look up the connection state based on the ConnectionId.
Is this the expected behaviour in the scale-out model? If so, then I assume that I should be saving connection state data in a database, so that the hubs on all the servers will be able to look up the connection state based on the ConnectionId.
Yes, this is expected behaviour; you should use a shared resource like a database or a cache. But connectionIds will not be enough by themselves, because a client will get different connectionIds on different servers or after refreshes. So you should map each connectionId to its client. When a disconnect occurs, find the client by connectionId and check whether that client still has another connectionId.
I have answered this in more detail here, based on a similar question.
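As a concrete illustration of that mapping, here is a hedged sketch for SignalR 2.x; IConnectionMappingStore is a made-up abstraction over whatever shared database or cache the farm uses, and the hub assumes dependency injection is already wired up:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Hypothetical wrapper around the shared store (SQL, Redis, ...) that every
// server in the farm can reach.
public interface IConnectionMappingStore
{
    Task AddAsync(string userName, string connectionId);
    Task RemoveAsync(string userName, string connectionId);
    Task<bool> HasOtherConnectionsAsync(string userName, string exceptConnectionId);
}

public class MyHub : Hub
{
    private readonly IConnectionMappingStore _store;

    public MyHub(IConnectionMappingStore store) { _store = store; }

    public override async Task OnConnected()
    {
        // Persist the mapping so any server handling a later invocation can see it.
        await _store.AddAsync(Context.User.Identity.Name, Context.ConnectionId);
        await base.OnConnected();
    }

    public override async Task OnDisconnected(bool stopCalled)
    {
        var user = Context.User.Identity.Name;
        await _store.RemoveAsync(user, Context.ConnectionId);

        // Only treat the user as gone when no other connectionId remains for them.
        var stillOnline = await _store.HasOtherConnectionsAsync(user, Context.ConnectionId);
        if (!stillOnline)
        {
            // mark the user offline in whatever way the application needs
        }

        await base.OnDisconnected(stopCalled);
    }
}
```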
For a chat application, I use an Azure architecture with SignalR, with the web role acting as the SignalR server (the messages are not broadcasts but are intended for a specific user/client).
I want to scale out the SignalR server along with the web roles, to handle heavy user load. However, the SignalR documentation doesn't recommend using the pre-baked SignalR scale-out methods based on a backplane (Redis, Service Bus) for cases where the number of messages increases as more users connect (i.e. user-event-driven scenarios). It explicitly states: "Client-to-client (e.g., chat): In this scenario, the backplane might be a bottleneck if the number of messages scales with the number of clients; that is, if the rate of messages grows proportionally as more clients join."
Question:
Does anyone know of a custom scale-out solution for such a high-frequency case, one that doesn't push messages to every server instance, or of some other scale-out approach?
I've already looked everywhere in the SignalR documentation and the related videos but couldn't find anything, other than the term "filtered bus", which is never explained: what it is or how it should be used.
I figured it out myself: the basic idea is server affinity/sticky sessions.
Each web-role instance acts as a stand-alone SignalR server. At the time of the first connection, I let the Azure load balancer choose any web-role instance and save that instance's IP address, together with the client identifier, in a map. If another connect request comes from the same client (e.g. after a page refresh), I check the IP address of the current role instance; if it matches the entry in the map I let the connection proceed, otherwise I disconnect the client and reconnect it to the correct web-role instance.
Each worker-role instance also acts as a SignalR .NET client and connects to all available SignalR servers (all web-role instances). Before sending a message to a SignalR server (web role), I look up the map to determine the correct SignalR server instance (depending on the intended JS recipient).
Benefits:
There is no need for a backplane technology (and hence no delays in message delivery).
Each web-role instance only cares about the clients connected to it, and a message doesn't have to be duplicated on every SignalR server. Hence it scales pretty well.
Easy to implement.
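A rough sketch of the affinity check on connect, assuming SignalR 2.x; IInstanceAffinityMap, the redirect client call and the way the current instance address is obtained are all assumptions, not part of the original answer:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Hypothetical shared map: client identifier -> web-role instance address.
public interface IInstanceAffinityMap
{
    // Returns the instance already assigned to this client, or assigns the candidate.
    Task<string> GetOrAssignInstanceAsync(string clientId, string candidateInstance);
}

public class ChatHub : Hub
{
    // In a real web role this would come from RoleEnvironment / configuration.
    private static readonly string CurrentInstance = "10.0.0.4";

    private readonly IInstanceAffinityMap _map;

    public ChatHub(IInstanceAffinityMap map) { _map = map; }

    public override async Task OnConnected()
    {
        var clientId = Context.User.Identity.Name;
        var owner = await _map.GetOrAssignInstanceAsync(clientId, CurrentInstance);

        if (owner != CurrentInstance)
        {
            // Wrong instance: tell the JS client to drop this connection and
            // reconnect until the load balancer lands it on its assigned instance.
            Clients.Caller.redirect(owner);
        }

        await base.OnConnected();
    }
}
```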
I'm writing a Windows-based client (C++) and server (C#) application which communicate with each other via TCP packets. The client sends data and the server needs to acknowledge it.
For this purpose I make a single socket() and connect() call at startup, for the client's whole lifetime. Some error checking and retries have been added around the send() and recv() calls. Note that a client sends one set of data (multiple packets) at a time and then quits.
Now my questions are:
If the server is running continuously (e.g. as a Windows service) on some PC, do I really need to handle connection breakdown (network failure) by creating a new socket and connecting again from the client?
If so, do I need to resend the data from the beginning, or from the point where the client last failed to communicate?
I want to know the general methods people use for dealing with this kind of situation in network applications.
Do I really need to handle connection breakdown by creating a new socket and connecting again from the client?
That depends on how precious your data is. If you want to make sure it ended up at the server and an error occurred while sending, then you have to consider it "not sent".
If so, do I need to resend the data from the beginning, or from the point where the client last failed to communicate?
That depends entirely on how your application logic and application protocol work. From your description we can't know how you send your data or how the server would recognize data it has already seen.
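One common way to make resends safe is to tag each message with a sequence number so the server can acknowledge and silently drop duplicates. A sketch in C# on the server side (the clientId/seqNo framing is an assumption about your protocol, not something the question specifies):

```csharp
using System.Collections.Generic;

// Hypothetical server-side dedup: each client message carries (clientId, seqNo).
// After a reconnect the client resends from its last unacknowledged seqNo, and
// the server acknowledges but skips anything it has already processed.
public class DedupReceiver
{
    private readonly Dictionary<string, long> _lastSeen = new Dictionary<string, long>();

    // Returns true if the message is new and should be processed.
    public bool Accept(string clientId, long seqNo)
    {
        long last;
        if (_lastSeen.TryGetValue(clientId, out last) && seqNo <= last)
            return false; // duplicate caused by a resend

        _lastSeen[clientId] = seqNo;
        return true;
    }
}
```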
Do I really need to handle connection breakdown (network failure) by creating a new socket and connecting again from the client?
Yes: once a TCP connection has broken down or been shut down, that socket cannot simply be connect()ed again, so the usual approach is to discard it and create a fresh socket for each reconnection attempt.
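A minimal reconnect sketch, shown in C# with TcpClient for brevity (the question's client is C++, but the pattern of building a fresh socket per attempt with a simple backoff is the same):

```csharp
using System;
using System.Net.Sockets;
using System.Threading;

// Hypothetical reconnect helper: every attempt uses a brand-new TcpClient,
// because a TCP socket whose connection has failed cannot be reconnected.
public static class Reconnector
{
    public static TcpClient ConnectWithRetry(string host, int port, int maxAttempts = 5)
    {
        for (var attempt = 1; attempt <= maxAttempts; attempt++)
        {
            var client = new TcpClient(); // fresh socket per attempt
            try
            {
                client.Connect(host, port);
                return client;
            }
            catch (SocketException)
            {
                client.Close();
                // Simple exponential backoff, capped at 30 seconds.
                Thread.Sleep(TimeSpan.FromSeconds(Math.Min(30, Math.Pow(2, attempt))));
            }
        }
        throw new InvalidOperationException("Could not reach the server.");
    }
}
```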
Originally my code created a new HttpClient in a using statement on every request. Then I read several articles about reusing HttpClient to increase performance.
Here is an excerpt from one such article:
I do not recommend creating a HttpClient inside a Using block to make a single request. When HttpClient is disposed it causes the underlying connection to be closed also. This means the next request has to re-open that connection. You should try and re-use your HttpClient instances.
http://www.bizcoder.com/httpclient-it-lives-and-it-is-glorious
It seems to me that leaving a connection open is only going to be useful if multiple requests in a row go to the same place, such as www.api1.com.
My question is: how many HttpClients should I create?
My website talks to about ten different services on the back end.
Should I create a single HttpClient for all of them to consume, or should I create a separate HttpClient per domain that I use on the back end?
Example:
If I talk to www.api1.com and www.api2.com, should I create 2 distinct HttpClients, or only a single HttpClient?
In fact, disposing of an HttpClient will not forcibly close the underlying TCP/IP connection in the connection pool. Your best performance scenario is what you have suggested:
Keep an instance of HttpClient alive for the lifetime of your application, one for each back-end service you need to connect to.
Depending on the details of the back-end service, you may also want a client for each distinct API on that back-end service. (APIs in the same domain could be routed all over the place.)
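A sketch of that recommendation, with placeholder service names and URLs: one long-lived HttpClient per back-end service, created once and reused.

```csharp
using System;
using System.Net.Http;

// One long-lived HttpClient per back-end service, reused for the lifetime
// of the application. The service names and base addresses are placeholders.
public static class BackendClients
{
    public static readonly HttpClient Api1 = new HttpClient
    {
        BaseAddress = new Uri("https://www.api1.com/")
    };

    public static readonly HttpClient Api2 = new HttpClient
    {
        BaseAddress = new Uri("https://www.api2.com/")
    };
}

// Usage: var response = await BackendClients.Api1.GetAsync("orders/42");
```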