SignalR scaleout with Azure for chat scenario - c#

For a chat application, I use an Azure architecture with SignalR, with the web role acting as the SignalR server (the messages are not broadcast-type but are intended for a specific user/client).
I want to scale out the SignalR server along with the web roles to handle heavy user load. However, the SignalR documentation doesn't recommend using the pre-baked SignalR scale-out methods based on a backplane (Redis, Service Bus) for cases where the number of messages increases as more users connect (i.e. in user-event-driven scenarios). It explicitly states: "Client-to-client (e.g., chat): In this scenario, the backplane might be a bottleneck if the number of messages scales with the number of clients; that is, if the rate of messages grows proportionally as more clients join."
Question:
Does anyone know of a custom scale-out solution for such a high-frequency case, one that doesn't push every message to each server instance, or some other scale-out approach?
I've already looked everywhere in the SignalR documentation and the related videos but couldn't find anything, other than the term "filtered bus", with no explanation of what it is or how it should be used.

I figured it out myself: the basic idea is server affinity/sticky sessions.
Each web-role instance acts as a stand-alone SignalR server. At first-connection time, I let the Azure load balancer choose any web-role instance and save that instance's IP address together with the client identifier in a map. If another connect request comes from the same client (e.g. after a page refresh), I check the IP address of the current role instance; if it matches the map entry I let it proceed, otherwise I disconnect the client and reconnect it to the correct web-role instance.
Each worker-role instance also acts as a SignalR .NET client and connects to all the available SignalR servers (all web-role instances). Before sending a message to a SignalR server (web role), I look up the map to determine the correct SignalR server instance, depending on the intended JavaScript client recipient (see the sketch below).
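Purely as an illustration of that lookup, here is a minimal sketch of a worker-role router, assuming the Microsoft.AspNet.SignalR.Client package; the hub name "ChatHub", the method "SendToUser", and the way the map gets populated are hypothetical placeholders, not taken from the original post.

```csharp
// Sketch only: route each outgoing message to the web-role instance that owns
// the target client's connection, instead of broadcasting to every server.
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

public class StickyMessageRouter
{
    // clientId -> endpoint of the web-role instance holding that client's connection
    private readonly ConcurrentDictionary<string, string> _clientToInstance =
        new ConcurrentDictionary<string, string>();

    // instance endpoint -> open SignalR connection to that instance
    private readonly ConcurrentDictionary<string, IHubProxy> _instanceProxies =
        new ConcurrentDictionary<string, IHubProxy>();

    // Called once per web-role instance when the worker role starts.
    public async Task ConnectToInstanceAsync(string instanceEndpoint)
    {
        var connection = new HubConnection("http://" + instanceEndpoint + "/signalr");
        IHubProxy proxy = connection.CreateHubProxy("ChatHub"); // hypothetical hub name
        await connection.Start();
        _instanceProxies[instanceEndpoint] = proxy;
    }

    // Called when a client connects and the load balancer's choice is recorded.
    public void RegisterClient(string clientId, string instanceEndpoint)
    {
        _clientToInstance[clientId] = instanceEndpoint;
    }

    // Look up the owning instance and invoke the hub method only there.
    public Task SendToClientAsync(string clientId, string message)
    {
        string instance = _clientToInstance[clientId];
        return _instanceProxies[instance].Invoke("SendToUser", clientId, message); // hypothetical method
    }
}
```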
Benefits:
There is no need for a backplane technology (and hence no delays in message delivery).
Each web-role instance only cares about the clients connected to it, and a message doesn't have to be duplicated on every SignalR server. Hence it can scale pretty well.
Easy to implement.

Related

Is SignalR backplane necessary when using sticky sessions?

We have a multi-server, load-balanced environment for our application which uses sticky sessions. We're considering adding a SignalR implementation to send updates to individual clients. I've played around with SignalR a little bit so I'm aware of backplanes. I'm wondering, since we're using sticky sessions, if we do not need to implement a backplane since a single server is handling user requests after authentication.
In your case, a backplane could be used to send messages to a user regardless of which server they're connected to.
If you don't want to use a backplane, you will only be able to send updates to a client from the machine they are connected to, which means each server would have to check for user presence locally before sending a message.

How to transfer context to a WebSocket session on reconnect?

I am working on a web application in C#, ASP.NET, and .NET framework 4.5 with the use of WebSockets. In order to plan for scalability in the future, the application pool has the option for web gardens enabled to simulate multiple web servers on my single development machine.
The issue I am having is how to handle re-connects on the websocket side. When a new websocket session is initially created, the client browser can indirectly lock records in a SQL database. But when the connection is lost, my boss would like the browser to attempt to re-connect to the same instance of the websocket server session so it doesn't need to re-lock anything.
I don't know if something like this is possible because on re-connect the load balancer will "randomly" select which web server to handle the new connection. I was thinking of some hack to work around this but it isn't very clean:
Client opens initial websocket connection on Server A and locks a record.
Client temporarily loses internet connection and the websocket closes. (It is important to note that the server side will wait up to 60 seconds before it "disposes" itself; therefore, the SQL record will remain locked until the 60 seconds has elapsed).
Client internet connection is restored and reconnects to the website but this time on Server B.
Server B sees that this context was initially connected on Server A and therefore transfers the session to Server A.
Server A checks the process id to see if it is running in the correct worker process (in the case of a web garden).
Server A has found the initial instance and handles the connection.
I tried Googling this question but it doesn't seem like a very common issue, probably because most websocket web apps don't keep records locked for as long as my application does (which could be up to an hour).
Thanks in advance for all of your help!
Update 3/15/2016
I was hoping that Server.TransferRequest would be helpful; however, it doesn't seem to work for web sockets. Does anyone know of a good way to transfer a websocket context from one process to another?
First, you might want to re-examine why you're locking records for a long time and requiring a client to come back to the same server every time. That is not the usual type of high-scale web architecture, and perhaps you're only creating this need to reconnect to the identical server because of that requirement; maybe you should rethink the design so that your application works just fine no matter which host a user connects to.
That would certainly simplify scaling to large numbers of users and servers if you could remove that requirement. You can always implement local caching and semi-sticky connections later as a performance enhancement, but only after you drop the requirement to connect to the same host 100% of the time.
If you're going to stick with that requirement to always connect to the same host, then you will ultimately need some sort of sticky load balancing. There are a lot of different schemes. Some are driven by the networking infrastructure in front of your server, some are driven by your server and some are even client driven. They all have different tradeoffs. Here's a brief run-down of some of the schemes:
Hardware, networking load balancer. Here you have a fairly transparent mechanism by which a hardware load balancer (which is really just software running on a custom piece of hardware) sits in front of your web server farm and uses various techniques to make sure that whichever server a given user originally connected to, they will be reconnected to on subsequent connections. This can be based on various schemes (IP address, cookie value, etc.) as the key to identifying a particular user, and it typically has a number of possible configurations for how it can work.
Proxy load balancer. This is essentially an all software version of the hardware load balancer. Here a proxy sits in front of your server farm and directs connections to a particular server based on some algorithm (IP address, cookie value, etc...).
Server Redirect. Here an incoming connection is randomly assigned to a server. Upon connection, the server figures out where the connection is supposed to go and returns a 302 redirect to the actual host, causing the client to reconnect to the proper server. This involves one less layer of infrastructure (no physical load balancers), but exposes the different server endpoints to the outside world, which the first two options do not.
Client Selection Algorithm. Here the client is given knowledge of the various server endpoints and is coded with an algorithm for consistently selecting one for this user. It could be a hash of a userID that is then mapped into the server bucket pool, with the end result that the client chooses a particular DNS name such as cl003.myserver.com, which it then connects to (see the sketch below). This choice requires the least work server-side, so it can be simpler to implement, but it requires changing the client code in order to modify the algorithm.
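As a rough illustration of that last option, here is a minimal sketch. The pool size and the cl0xx.myserver.com naming are only illustrative, carried over from the example above, not prescribed by the answer.

```csharp
// Sketch: map a stable user id to one of N server host names. A stable hash is
// used instead of string.GetHashCode(), which is not guaranteed to be
// consistent across processes or runtimes.
using System;

public static class ServerSelector
{
    private const int ServerCount = 10; // assumed pool size

    public static string SelectHost(string userId)
    {
        // FNV-1a hash for a deterministic, platform-independent bucket choice.
        uint hash = 2166136261;
        foreach (char c in userId)
        {
            hash = unchecked((hash ^ c) * 16777619);
        }
        int bucket = (int)(hash % (uint)ServerCount);
        return string.Format("cl{0:D3}.myserver.com", bucket);
    }
}
```

A downside, as the answer notes, is that changing the algorithm means changing the client; resizing the pool also remaps most users unless you move to something like consistent hashing.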
For an article on sticky load balancing for Amazon Web Services to give you an idea on how one mechanism works, you can read this: Elastic Load Balancing: Configure Sticky Sessions for Your Load Balancer.
Here's another article on how the nginx proxy is configured for sticky load balancing.
You can find lots of other articles with a Google search for "sticky load balancing".
The pros and cons of the various schemes would be the subject of a much longer discussion, and some of it involves knowledge of your more specific requirements and the specific capabilities of your infrastructure.

how to use netMsmqbinding - with server connected scenario

This might look like a question whose answer you can read on MSDN, but I still want to ask about the scenario, as I want to solve the business problem.
I have a service hosted on a server, and a client makes service calls. It currently uses netTCP binding. Everything works fine when the service is available, i.e. when the server is up and running. Now I need to handle the server-down scenario. I use a local cache file on the client to serve client requests while the server is down. Now I want to cache all the requests made while the server is down and make the service calls once the server is up and running again.
I am thinking about using the netMsmqBinding, because all I've read suggests that it works well in the disconnected scenario.
Q.1 Can I use the netMsmq to handle this scenario?
Q.2 If not then what could be another approach with which I can follow to solve this problem?
Q.3 Can I use WS-Discovery to detect, when the server is down, that client calls won't be able to reach the service?
EDIT: The scenario is client-server, but I do need to give a response to the client on every call. The client is also developed and maintained by me, so I am in a good position to implement the most suitable solution.
Please guide me as I'm not too good with WCF.
Yes, you can use netMsmqBinding for this purpose. We are doing that for services running over a satellite link that can be down often.
One important limitation you need to take into account is that, because it's a queue-based transport, all calls must be one-way. If you need to get the result of a request, you'll have to provide a separate response mechanism (it can be a similar queue in the opposite direction), as sketched below.
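As a rough sketch of what that looks like in the service contracts (the contract and member names are made up for illustration, not taken from the question):

```csharp
// Queued (netMsmqBinding) operations must be marked IsOneWay = true: no return
// values, no out/ref parameters, no faults flowing back to the caller.
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IRequestSubmission
{
    [OperationContract(IsOneWay = true)]
    void SubmitRequest(ClientRequest request);
}

// Responses travel over a second queue in the opposite direction,
// hosted on the client side and called by the server when it is done.
[ServiceContract]
public interface IResponseNotification
{
    [OperationContract(IsOneWay = true)]
    void RequestProcessed(string requestId, bool succeeded);
}

[DataContract]
public class ClientRequest
{
    [DataMember] public string RequestId { get; set; }
    [DataMember] public string Payload { get; set; }
}
```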
Regarding question 1: using MSMQ is excellent for a scenario where the service may not always be up and running. Note that the server hosting the message queue must be up and reachable to receive the messages. However, you haven't told us much else about your scenario, particularly why you currently have NetTCP. The reason that's important is that there are some things you cannot do with MSMQ; for example, duplex communication won't work out of the box.
Regarding question 2: an alternative may be to implement logic in the client (it's unclear from the question whether you own the client software) to keep a local queue and retry messages later if the service is (temporarily) offline. You could even have a proxy MSMQ service on the client, relaying the messages to the main service once it's up.
Regarding question 3: yes, you can use Discovery for this. The service will have to announce to the clients when it goes online or offline. The simplest example uses the UdpAnnouncementEndpoint. In the clients you can use the AnnouncementService class to listen for the service coming online or offline and keep a local list of available services. Alternatively (for example when UDP broadcasts aren't feasible) you can create a discovery proxy service at a well-known location that listens for announcements, which the clients can query for instant knowledge of whether the service they need is online.
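For reference, a minimal sketch of that client-side announcement listener, using the standard System.ServiceModel.Discovery types from .NET 4 (the tracker class itself is just an illustration, not a prescribed design):

```csharp
// Keep a local list of services that have announced themselves online,
// using the standard UDP multicast announcement endpoint.
using System;
using System.Collections.Concurrent;
using System.ServiceModel;
using System.ServiceModel.Discovery;

public class ServiceAvailabilityTracker
{
    private readonly ConcurrentDictionary<Uri, EndpointDiscoveryMetadata> _online =
        new ConcurrentDictionary<Uri, EndpointDiscoveryMetadata>();

    public ServiceHost StartListening()
    {
        var announcementService = new AnnouncementService();

        // Update the local list whenever a service announces it is online or offline.
        announcementService.OnlineAnnouncementReceived += (sender, e) =>
        {
            _online[e.EndpointDiscoveryMetadata.Address.Uri] = e.EndpointDiscoveryMetadata;
        };
        announcementService.OfflineAnnouncementReceived += (sender, e) =>
        {
            EndpointDiscoveryMetadata removed;
            _online.TryRemove(e.EndpointDiscoveryMetadata.Address.Uri, out removed);
        };

        // Host the listener on the standard UDP announcement endpoint.
        var host = new ServiceHost(announcementService);
        host.AddServiceEndpoint(new UdpAnnouncementEndpoint());
        host.Open();
        return host;
    }

    public bool IsOnline(Uri serviceAddress)
    {
        return _online.ContainsKey(serviceAddress);
    }
}
```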

Multi Client/Server Discovery in C#

I'm developing a multiple client / multiple server program in C#, and before I got down to the nitty gritty, I was wondering if anyone has ever worked on a similar project and might be able to share their tips / ideas for implementation.
The servers will sit on many PCs, and listen for incoming connections from clients (Or should the Servers broadcast, and the clients listen?).
When a client starts, it should populate a list of potential server IP addresses automatically.
When a server closes, the client should remove that server from its list.
When a new server starts, the clients should be notified and have it added to their list.
A server may also act as a client, and should be able to see itself, as well as all other servers.
A message sent from a client to the server, that affects the server, should broadcast the change to all connected clients.
Should my server be a Windows Service? What advantages/disadvantages does that present?
Any ideas on how I might go about getting started on this? I've been looking into UDP Multicast, and LAN Scans. I'm using C# and .NET 4.0
EDIT: Found this: http://code.google.com/p/lidgren-network-gen3/ Does anyone have any experience with it and can recommend/not recommend it?
I would suggest using NetPeerTcpBinding in WCF to create a peer mesh. Clients and servers would all join the mesh using a peer resolver. You can use PNRP or create a custom peer resolver (.NET actually provides you with an implementation called CustomPeerResolverService). See the Peer-to-Peer Networking documentation.
Also you can implement a Discovery service using DiscoveryProxy. With a discovery service, services can announce their endpoints. The discovery service can then service find requests (see FindCriteria) to return endpoints that match the requests. This is referred to as Managed Discovery. Another mode is Ad Hoc Discovery. Each service will announce their endpoints via UDP and discovery clients will probe the network for these endpoints.
I have actually implemented a Managed Discovery service in combination with peer-to-peer WCF networking to provide a redundant mesh of discovery services that all share published service endpoints via P2P. I have found that Managed Discovery performs far better, as Ad Hoc Discovery using UDP probing is slower and has some limitations crossing certain network boundaries, while Managed Discovery leverages a centralized repository of announced service endpoints.
Either (or both) of these technologies can, I think, lead to your solution.
So this is effectively a peer-to-peer style network (almost like BitTorrent), where all servers are clients, but not all clients are servers, and the requirement is that every client should hold a list of all other servers (which are, in turn, clients).
The problem lies in getting the server IPs to the clients in the first place. You can use a master server that has a fixed DNS to act as a kind of tracker, which all of the servers check in to, and the clients check periodically.
Another option (or an additional method) is to use a peer-exchange-style system, where the clients and servers use UDP broadcast packets over a local network to discover each other and then exchange the servers they know of, kind of like a routing protocol (see the sketch at the end of this answer). However, if the PCs are spread out over a non-local network such as the internet, there's little chance that they will ever discover each other on their own, making this method only useful in conjunction with other ways of finding servers. Also, you will probably have to deal with router UPnP to allow clients to connect to each other through each other's router NAT, so this method is probably too complex for the gains you get. (However, if you're just on a LAN, this is all you need!)
A third option (and again, this sounds a lot like torrent technology), is to use Distributed Hash Tables to store information about the IPs of your servers in the cloud, without having to rely on a central master server.
I have had a shot at a project like this before (a pure P2P, server-less messaging system), but could never get it to work. Without a huge number of peers, or a master server to track all of the other servers, it is very difficult to reliably retrieve the IPs of all the servers.
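For the LAN-only UDP broadcast option mentioned above, a bare-bones sketch might look like this. The port number and message format are arbitrary choices for the example, and there is no protection against spoofed or malformed packets.

```csharp
// Servers periodically broadcast a small "I'm here" datagram; clients listen
// on the same port and build up a list of the servers they hear from.
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

public static class LanDiscovery
{
    private const int DiscoveryPort = 41234; // arbitrary example port

    // Server side: broadcast an announcement every few seconds.
    public static void Announce(string serverName, int servicePort)
    {
        using (var udp = new UdpClient())
        {
            udp.EnableBroadcast = true;
            var endpoint = new IPEndPoint(IPAddress.Broadcast, DiscoveryPort);
            byte[] payload = Encoding.UTF8.GetBytes(serverName + ":" + servicePort);
            while (true)
            {
                udp.Send(payload, payload.Length, endpoint);
                Thread.Sleep(TimeSpan.FromSeconds(5));
            }
        }
    }

    // Client side: listen for announcements and report each sender.
    public static void Listen(Action<IPAddress, string> onServerSeen)
    {
        using (var udp = new UdpClient(DiscoveryPort))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] buffer = udp.Receive(ref remote);
                onServerSeen(remote.Address, Encoding.UTF8.GetString(buffer));
            }
        }
    }
}
```

In practice you would run Announce on a background thread on each server and Listen on each client, and expire entries that haven't been seen for a while so that closed servers drop off the clients' lists.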

How to establish 2-way communication between a web server and a site server?

I am planning a SaaS system, to be written in C#, ASP.NET using WCF that has two separate components:
On a static IP web server in the cloud will be a web app, common to all clients.
Inside each client's office will be another app, installed on a server with IIS.
The site app will obviously be able to connect to the web services published on the web site. But here's the rub - I also want the web app to be able to initiate a connection to the site app... and the on-site server may not necessarily have a static IP. I can't control this, because we may have hundreds of clients at some point in the future, and we cannot limit our saleability by insisting that the customer have a server with a fixed IP.
So, how to do this?
I could have the site apps "checking in" with the web app every minute or so, giving it the chance to respond with "while you're here, please do x, y, z...", but that seems very inelegant. Also, if we're talking about hundreds of clients, I don't want to bombard my web server with all these "hi there!" messages if they're not actually required.
Is there a better way?
WCF? Here we go:
Use a message-based approach (exchange messages, not stateful method calls).
Clients connect to the server and establish an HTTP-based two-way connection. This way the server can call back to connected clients. This is standard WCF stuff and works well through NAT with version 4 of the .NET Framework.
Voila. In case of a disconnect, the client can reconnect, re-identify itself, and get the pending messages. A small sketch of such a callback contract follows.
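As a very small sketch of what that callback arrangement looks like in WCF (all names here are placeholders, and the binding choice with its NAT behaviour is a separate decision):

```csharp
// Duplex (callback) contract: the server can call back to a registered client
// over the connection the client opened.
using System.ServiceModel;

public interface ISiteCallback
{
    // Server -> client: "while you're here, please do x, y, z..."
    [OperationContract(IsOneWay = true)]
    void ExecuteCommand(string command);
}

[ServiceContract(CallbackContract = typeof(ISiteCallback))]
public interface ICentralService
{
    // Client -> server: identify and register this site so the server can
    // reach it later over the same connection.
    [OperationContract]
    void Register(string siteId);
}

// On the site (client) side, the callback handler is supplied via an
// InstanceContext, for example:
//   var context = new InstanceContext(new SiteCallbackHandler());
//   var factory = new DuplexChannelFactory<ICentralService>(context, "centralEndpoint");
//   var proxy = factory.CreateChannel();
//   proxy.Register("site-42");
```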
IIRC "push communication" is done by letting the client do a HTTP Request with an indefinate timeout. Then the server responds when he has something to say. After the respons the client immediately makes a new request.
It works out the same way like the server is making the connection and takes far less resources than polling.
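A rough client-side sketch of that long-polling loop, assuming a hypothetical /poll endpoint on the web app that holds the request open until it has a command (or returns 204 No Content after its own timeout):

```csharp
// Long-polling client loop: keep one request pending at the server at all
// times so it can "push" a command by simply answering the open request.
using System;
using System.IO;
using System.Net;
using System.Threading;

public static class LongPollClient
{
    public static void Run(string pollUrl, Action<string> onCommand)
    {
        while (true)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(pollUrl);
                request.Timeout = Timeout.Infinite; // wait as long as the server holds the request

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    if (response.StatusCode == HttpStatusCode.OK)
                        onCommand(reader.ReadToEnd()); // server had something to say
                }
            }
            catch (WebException)
            {
                Thread.Sleep(TimeSpan.FromSeconds(5)); // back off briefly, then reconnect
            }
            // Loop immediately so the server always has a pending request to answer.
        }
    }
}
```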
Dynamic DNS is one possibility, but depends on your clients/customers.
If the site app is created by you, it only has to contact the web server when its address has changed (or when the site server/web app is restarted). Still, a keep-alive heartbeat of, say, every 30 minutes to 1 hour isn't a bad idea.
Edit: I think SNMP services may provide the answer but I'm not a networking expert. You'll have to do some digging or ask a separate question on stackoverflow.
What would you say about Comet technology?
Sounds like you'll definitely need some sort of registry on the server; it could then attempt to call out to the client apps when it needs work done.
Generally it is client apps that check in with the server every X seconds - this is how Selenium Grid works, anyway, with a central hub that clients register with. When the hub receives a request to run some tests, it passes the jobs out to the clients to perform.
You may not need the "checking in". The server could just attempt to call out to a registered client app until it finds one that is available. This way only the server would need a static address (you could use a DNS name instead of an IP to make it more robust).
Also have a look at XMPP PubSub. This could be a more robust and standardised way to handle this.
In the end I decided to go with NetTcpBinding, for reasons best given by @Allon Guralnek here. It's worth clicking through and reading what he has to say...
