I have an ASP.NET web application which is hosted on two different servers, one being the primary and the other the secondary.
Using DNSMadeEasy I have set up a DNS failover so that when the primary server goes down, the secondary server takes over.
This setup is working fine according to the requirements; however, there is one last catch.
My application uses a Windows service for billing.
The billing service is always running on the primary server and always stopped on the secondary server.
I want the failover to automatically start the billing service on the secondary server when DNS fails over, and to stop it on the secondary server again when DNS switches back to the primary.
What do I need to do to make this happen?
Changing your architecture a bit would help (as long as you can make these changes):
Put a load balancer (HAProxy, NGINX, etc) in front of both web servers.
Run the ASP.NET app and billing service on both web servers.
Re-architect your billing service so that it can run on both servers at the same time without interference.
You're already running a standby server, so you'll get lots of benefits from this:
Better scalability: load is shared between web servers/billing services.
Zero-downtime updates: remove each web server from the load balancer one at a time, do your update, then add it back into rotation.
Robustness: if one web server fails, you automatically have an architecture that doesn't have to worry about DNS TTL problems and will fail over automatically.
If you can't really change much of your architecture, here's an idea:
Listen for the first web request to the secondary web server and start the billing service when it occurs; a minimal sketch of this idea follows. Depending on how it's designed, you may also need to stop the billing service on the primary.
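One way to do that, as a rough sketch only: an HttpModule on the secondary that starts the service on the first incoming request. This assumes the service is installed as "BillingService" (a placeholder name) and that the application pool identity has permission to control services.

using System;
using System.ServiceProcess; // add a reference to System.ServiceProcess.dll
using System.Web;

// Hypothetical sketch: start the billing service the first time the
// secondary server receives traffic after a failover.
public class BillingFailoverModule : IHttpModule
{
    private static bool _started;
    private static readonly object _sync = new object();

    public void Init(HttpApplication context)
    {
        context.BeginRequest += (sender, e) => EnsureBillingServiceStarted();
    }

    private static void EnsureBillingServiceStarted()
    {
        if (_started) return;
        lock (_sync)
        {
            if (_started) return;
            using (var sc = new ServiceController("BillingService")) // assumed name
            {
                if (sc.Status == ServiceControllerStatus.Stopped)
                {
                    sc.Start();
                    sc.WaitForStatus(ServiceControllerStatus.Running,
                                     TimeSpan.FromSeconds(30));
                }
            }
            _started = true;
        }
    }

    public void Dispose() { }
}

The module would be registered in web.config on the secondary only. Stopping the service again when DNS switches back would need a separate mechanism, e.g. a scheduled task that stops it after traffic goes quiet.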
I am working on a web application in C#, ASP.NET, and .NET Framework 4.5 that uses WebSockets. In order to plan for scalability in the future, the application pool has web gardens enabled to simulate multiple web servers on my single development machine.
The issue I am having is how to handle re-connects on the websocket side. When a new websocket session is initially created, the client browser can indirectly lock records in a SQL database. But when the connection is lost, my boss would like the browser to attempt to re-connect to the same instance of the websocket server session so it doesn't need to re-lock anything.
I don't know if something like this is possible, because on re-connect the load balancer will "randomly" select which web server handles the new connection. I was thinking of a hack to work around this, but it isn't very clean:
Client opens initial websocket connection on Server A and locks a record.
Client temporarily loses internet connection and the websocket closes. (It is important to note that the server side will wait up to 60 seconds before it "disposes" itself; therefore, the SQL record will remain locked until the 60 seconds has elapsed).
Client internet connection is restored and reconnects to the website but this time on Server B.
Server B sees that this context was initially connected on Server A; therefore, transfers the session to Server A.
Server A checks the process id to see if it is running in the correct worker process (in the case of a web garden).
Server A has found the initial instance and handles the connection.
I tried Googling this question, but it doesn't seem to be a very common issue, probably because most websocket web apps don't keep records locked for as long as my application does (which could be up to an hour).
Thanks in advance for all of your help!
Update 3/15/2016
I was hoping that Server.TransferRequest would be helpful; however, it doesn't seem to work for web sockets. Does anyone know of a good way to transfer a websocket context from one process to another?
First, you might want to re-examine why you're locking records for a long time and requiring a client to come back to the same server every time. That is not the usual type of high-scale web architecture, and perhaps you're only creating this need to reconnect to the identical server because of that requirement; it may be worth rethinking that design so that your application works fine no matter which host a user connects to.
That would certainly simplify scaling to large numbers of users and servers. You can then always implement local caching and semi-sticky connections later as a performance enhancement, but only after you drop the requirement to connect to the same host 100% of the time.
If you're going to stick with that requirement to always connect to the same host, then you will ultimately need some sort of sticky load balancing. There are a lot of different schemes. Some are driven by the networking infrastructure in front of your server, some are driven by your server and some are even client driven. They all have different tradeoffs. Here's a brief run-down of some of the schemes:
Hardware, networking load balancer. Here you have a fairly transparent mechanism by which a hardware load balancer (which is really just software running on a custom piece of hardware) sits in front of your web server farm and uses various techniques to make sure that whatever server a given user was originally connected to, they will be reconnected to on subsequent connections. This can be based on various schemes (IP address, cookie value, etc...) as the key to identifying a particular user, and it typically has a number of possible configurations for how it can work.
Proxy load balancer. This is essentially an all software version of the hardware load balancer. Here a proxy sits in front of your server farm and directs connections to a particular server based on some algorithm (IP address, cookie value, etc...).
Server Redirect. Here an incoming connection is randomly assigned to a server. Upon connection, the server figures out where the connection is supposed to go and returns a 302 redirect to the actual host, causing the client to reconnect to the proper server. This involves one less layer of infrastructure (no physical load balancers), but exposes the different server endpoints to the outside world, which the first two options do not.
Client Selection Algorithm. Here the client is given knowledge of the various server endpoints and is coded with an algorithm for consistently selecting one for a given user. It could be a hash of a userID that is then mapped into the server bucket pool; the end result is that the client ends up choosing a particular DNS name such as cl003.myserver.com, which it then connects to (a sketch of this follows below). This choice requires the least work server-side, so it can be simpler to implement, but it requires changing the client code in order to modify the algorithm.
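As a rough illustration of that last scheme (the hostnames are placeholders, and the hash is hand-rolled because string.GetHashCode() isn't guaranteed to be stable across runtimes):

using System;

// Hypothetical sketch: deterministically map a user ID onto one of N
// server hostnames so the same user always picks the same endpoint.
public static class ServerSelector
{
    private static readonly string[] Servers =
    {
        "cl001.myserver.com",
        "cl002.myserver.com",
        "cl003.myserver.com"
    };

    public static string SelectServer(string userId)
    {
        // Simple stable hash, then reduce into the server bucket pool.
        int hash = 0;
        foreach (char c in userId)
            hash = unchecked(hash * 31 + c);
        return Servers[Math.Abs(hash % Servers.Length)];
    }
}

Note that adding or removing a server changes the mapping for most users; consistent hashing is the usual refinement if that matters.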
For an article on sticky load balancing for Amazon Web Services to give you an idea on how one mechanism works, you can read this: Elastic Load Balancing: Configure Sticky Sessions for Your Load Balancer.
Here's another article on how the nginx proxy is configured for sticky load balancing.
You can find lots of other articles with a Google search for "sticky load balancing".
A discussion of the pros/cons of the various schemes is the subject of a much longer discussion and some of it involves knowledge of more specific requirements and specific capabilities of your infrastructure.
At my work we have some old 'services' which are actually console applications running 24/7 on our servers. We finally decided to replace them with Windows Services, but with the console windows we could see when there were errors or certain events (like too many requests etc.), and with services we lose that visibility.
Now the idea came up to let every Windows service write to a database every minute (so we can see whether a service has stopped), and also write to the database when there are certain events (like the many requests).
Now I've been thinking about hosting a WCF service in each of the Windows services instead, because I think there are more possibilities (and I don't really like the idea of our databases having data inserted from 10+ services every minute), and it's still possible to write an application which sends requests to the WCF services every x amount of time.
Are there any downsides to WCF versus the approach of writing to a database? With a database it's of course easier to keep the data until it's been read, but that's only one (small) issue I can think of.
Thanks in advance!
I would go with the following options:
As you suggested, create a WCF endpoint and verify your services are up (a minimal sketch of such an endpoint follows the list). You will still need to implement error escalation; I like to use ELMAH.
Use a watchdog; see How can I verify if a Windows Service is running.
For the events, log service activity using a logger such as log4net or NLog, then append it to DB/email/file. My favorite is to manage logs using ELK (Elasticsearch/Logstash/Kibana) or Splunk.
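A minimal sketch of the WCF option, self-hosted inside each Windows service (the contract, port, and address are assumptions, not an existing API):

using System;
using System.ServiceModel;

// Hypothetical health-check contract each Windows service could host
// so a monitoring app can poll it periodically.
[ServiceContract]
public interface IHealthCheck
{
    [OperationContract]
    string Ping();
}

public class HealthCheck : IHealthCheck
{
    // Could return a summary of recent events instead of a plain "OK".
    public string Ping() { return "OK"; }
}

// Inside the Windows service's OnStart:
// _host = new ServiceHost(typeof(HealthCheck),
//     new Uri("net.tcp://localhost:9000/health")); // placeholder address
// _host.AddServiceEndpoint(typeof(IHealthCheck),
//     new NetTcpBinding(), string.Empty);
// _host.Open();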
I'm developing a client and server side application in .NET. There are a lot of clients which will contact and use one server. In between the client and the server, a Single Sign-On functionality is added, which I have no control over: neither source code nor configuration options. To pass this Single Sign-On, an HttpModule is specified and configured correctly in IIS and the web.config. The problem I'm facing is due to timeouts. When the users are inactive, meaning no server calls have been made within a span of ~5 minutes, the connection times out and no "new login prompt" is displayed.
This Single Sign-On portal was primarily built for ASP.NET applications, and the web browser will handle all of the cookie-related stuff, I guess.
After some time browsing, I came across the topic of WCF instance management. As I use wsHttpBinding with Transport security, the default instancing mode is PerSession.
Has anybody experienced this problem? How did you solve it?
Thanks
I read that SignalR on Azure requires a service bus implementation (e.g. https://github.com/SignalR/SignalR/wiki/Azure-service-bus) for scalability purposes.
However, my server only makes callbacks to a single client (the caller):
// Invoke a method on the calling client
Caller.addMessage(data);
If I don't need SignalR's broadcasting functionality, is an underlying service bus still necessary?
The Service Bus dependency is not something specific to Azure. Any time you have multiple servers in play, some of your SignalR clients will have created their connection to a specific server. If you want to keep multiple servers in sync, something needs to handle the server-to-server real-time communication. The pub/sub model of Service Bus lines up with this requirement quite well.
dfowleR lists a specific case of this in the comments. Make sure you read down that far!
If you are running on a single server (without the SLA on Azure), SignalR will work just fine on a Cloud Service Web Role as well as the new Azure Web Sites. I did a screencast on this simple scenario, which does not take on a service bus dependency but only runs on a single server.
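For completeness, if you later do need the backplane, recent SignalR releases wire it up through the Microsoft.AspNet.SignalR.ServiceBus package, roughly like this (the connection string and application name below are placeholders):

using Microsoft.AspNet.SignalR;
using Owin;

// Hypothetical sketch of enabling the Service Bus backplane so multiple
// servers stay in sync; not needed for the single-server scenario above.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        GlobalHost.DependencyResolver.UseServiceBus(
            "Endpoint=sb://yournamespace.servicebus.windows.net/;...", // placeholder
            "yourAppName");
        app.MapSignalR();
    }
}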
In order to support the load-balanced scenario, is it possible to establish a "server to server" SignalR PersistentConnection between multiple instances (i.e. on Azure)?
If so, we could use a SQL Azure table where all instances register at startup, so newer ones can connect to previous ones.
The team I work with has recently migrated from a self-hosted setup to IIS hosting of their web services. The migration went 'smoothly'; however, we are now seeing some funny behaviour on our server.
If we make a simple request from our client to our server to get some data from our DB, everything works as expected. If we make a call from our client to our server and the server then makes a call to a 3rd party service (hosted off site), we see a massive increase in response time. A call like this used to take less than a few seconds; since migrating to IIS hosting, the response time is over a few minutes.
Has anyone seen this behaviour before? Is it possible that we're having issues with credentials between the IIS hosted server and the 3rd party service?
As long as the bindings haven't changed and you are using the same service identity (i.e. Windows account) then you should get the same performance.
Have you checked whether the service is using static variables and/or multi-threading logic? You could be having resource contention problems with the proxy to the 3rd party service; one common way to reduce that is sketched below. You'll need to provide more detail about the service to get a more specific recommendation.
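If a shared proxy turns out to be the bottleneck, one common pattern is to cache the (expensive to build) ChannelFactory but create a fresh channel per call. Everything below is a placeholder sketch, not your actual contract or config:

using System;
using System.ServiceModel;

// Hypothetical contract standing in for the real 3rd party service.
[ServiceContract]
public interface IThirdPartyService
{
    [OperationContract]
    string GetData(int id);
}

public static class ThirdPartyClient
{
    // "thirdPartyEndpoint" is an assumed endpoint name from config.
    private static readonly ChannelFactory<IThirdPartyService> Factory =
        new ChannelFactory<IThirdPartyService>("thirdPartyEndpoint");

    public static TResult Call<TResult>(Func<IThirdPartyService, TResult> operation)
    {
        // One channel per call avoids contention on a single shared proxy.
        IThirdPartyService channel = Factory.CreateChannel();
        try
        {
            TResult result = operation(channel);
            ((IClientChannel)channel).Close();
            return result;
        }
        catch
        {
            ((IClientChannel)channel).Abort();
            throw;
        }
    }
}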