So it's easy to load balance an ASP.NET web application. You set up a load balancer between two servers, and if the web server isn't responding on Port 80, it won't receive requests.
Are there any proven techniques for doing this for a C# console application or Windows service that takes actions of its own volition? Are there any frameworks for knowing if peer processes are alive or dead, doing heartbeats, etc.?
I've been experimenting a bit with NServiceBus, and it seems like, for certain kinds of applications, it helps to have most of the work done in response to an event. That actually makes it more like a web application, and therefore easier to scale and load balance with multiple processes. But that feels like a half-baked solution, since in most cases there needs to be some concept of a "master" process that's responsible for getting work started.
NServiceBus does indeed handle this for you with its Distributor process (described here: http://docs.particular.net/nservicebus/scalability-and-ha/distributor/). The generic host that comes with NServiceBus allows you to have the exact same code and configuration run both as a console app and as a Windows service (described here: http://docs.particular.net/nservicebus/hosting/nservicebus-host/).
You can have this for events as well as for regular command messages.
If you want a "master" process to decide what to do when all the load-balanced work completes, that is provided to you in the form of the saga infrastructure (described here: http://docs.particular.net/nservicebus/sagas/ and demonstrated in the Manufacturing sample that comes with NServiceBus).
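For illustration, a load-balanced worker is mostly just a message handler. Here is a minimal sketch; the message and handler names are hypothetical, and the exact marker interfaces and handler signature vary a bit between NServiceBus versions:

```csharp
using System;
using NServiceBus;

// Hypothetical messages; the marker interface to use depends on your version.
public class WorkItemReceived : IMessage
{
    public Guid WorkItemId { get; set; }
}

public class WorkItemCompleted : IMessage
{
    public Guid WorkItemId { get; set; }
}

// Every load-balanced worker runs this handler; the Distributor hands
// each message to exactly one worker that has signalled it is ready.
public class WorkItemHandler : IHandleMessages<WorkItemReceived>
{
    public IBus Bus { get; set; } // property-injected by the generic host

    public void Handle(WorkItemReceived message)
    {
        // ... do the actual work here ...
        Bus.Publish(new WorkItemCompleted { WorkItemId = message.WorkItemId });
    }
}
```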
In short, you should pretty much be covered.
Related
I want to design an application that serves a REST API and also has a continuous process running that connects to websockets and processes the incoming data.
I have two approaches in mind:
Create a Windows Service with Kestrel running on one thread and the websocket listener on another. The API would be made accessible via an IIS reverse proxy.
Create the REST API with ASP.NET directly hosted in IIS and utilize the BackgroundService class for the websocket listener, as described here.
As I am new to the Windows Ecosystem I'd like to know if one of the approaches is more suitable or if I'm going about it the wrong way.
My understanding is that the Windows service approach should just work, but it seems more elaborate.
I'm unsure about the BackgroundService approach. The background process should really run 24/7. Are BackgroundServices designed for this? The docs always talk about long-running tasks, but does it also work for infinitely running ones, with restart on failure, etc.?
I'd recommend hosting the continuous process in a Windows service, as you have much more control over the lifecycle.
With a BackgroundService hosted in IIS, the process is controlled by IIS. In this case, it might be recycled from time to time or terminated if idle for some time. You can control this behavior with some configuration settings, but especially in combination with ASP.NET Core, the IIS process might be running while the underlying Kestrel service is only started when a request hits the website.
If the two components do not rely on each other, you could also split them and have the best of both worlds: the web application hosted in IIS and the websocket listener running in a Windows service.
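Whichever host you choose, the listener itself fits naturally into a BackgroundService whose ExecuteAsync loops forever and reconnects on failure. A minimal sketch, assuming a hypothetical ConnectAndProcessAsync helper that pumps the websocket until the connection drops:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class WebSocketListenerWorker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Loop forever: if the connection or the processing fails,
        // back off and reconnect instead of letting the task die.
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                await ConnectAndProcessAsync(stoppingToken);
            }
            catch (OperationCanceledException) when (stoppingToken.IsCancellationRequested)
            {
                break; // normal shutdown
            }
            catch (Exception)
            {
                // log the failure, then wait before reconnecting
                await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
            }
        }
    }

    private Task ConnectAndProcessAsync(CancellationToken token)
    {
        // hypothetical: open the websocket and pump messages until it drops
        return Task.CompletedTask;
    }
}
```

If you host this in a Windows service, the Microsoft.Extensions.Hosting.WindowsServices package provides UseWindowsService(), so the same worker runs unchanged in both hosts.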
I am trying to use web sockets to allow two Windows services on different machines to pass data back and forth. Almost all the examples or information I have found are about using web sockets for client/server-side communication. I am having trouble figuring out how to set this up. I have considered using WebSocketHost as a part of Microsoft.ServiceModel.WebSockets, but then I am unsure how to bind it to a local port and not a URL.
Does anyone have any suggestions?
Thanks
I am trying to use web sockets to allow two Windows services on different machines to pass data back and forth.
You can open sockets on both machines using WebSockets as you found. The examples mention clients and servers because this is the typical usage, however the API really doesn't care. As long as each side has a listener and a sender they can communicate.
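A minimal sketch of that symmetry: each Windows service hosts a listener with HttpListener and dials the other machine with ClientWebSocket. The host, port, and path here are hypothetical, and HttpListener's websocket support requires Windows 8 / Server 2012 or later:

```csharp
using System;
using System.Net;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class PeerSocket
{
    // Listener side: accept websocket upgrades on a local port.
    // Requires a URL ACL reservation (or admin rights) for the prefix.
    public static async Task ListenAsync()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8181/data/"); // hypothetical port/path
        listener.Start();
        while (true)
        {
            HttpListenerContext ctx = await listener.GetContextAsync();
            if (!ctx.Request.IsWebSocketRequest)
            {
                ctx.Response.StatusCode = 400;
                ctx.Response.Close();
                continue;
            }
            HttpListenerWebSocketContext wsCtx = await ctx.AcceptWebSocketAsync(subProtocol: null);
            // ... receive from wsCtx.WebSocket in a loop ...
        }
    }

    // Sender side: dial the listener on the other machine.
    public static async Task SendAsync(string message)
    {
        using (var ws = new ClientWebSocket())
        {
            await ws.ConnectAsync(new Uri("ws://otherhost:8181/data/"), CancellationToken.None);
            byte[] bytes = Encoding.UTF8.GetBytes(message);
            await ws.SendAsync(new ArraySegment<byte>(bytes),
                WebSocketMessageType.Text, endOfMessage: true, CancellationToken.None);
        }
    }
}
```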
However, I would like to mention that this isn't as simple as it sounds, because both machines aren't always available. Sometimes one or the other is busy, the network is blocked, something else is going on, or the listener is too busy to respond right away, so you're going to end up needing some sort of queuing on both sides.
If you're doing a process based operation where one side tells the other "I want X" and it's a big operation like producing a document, I've found it much more resilient to build a queue in a database and toss the request in there, then wait for the other side to update the record to say it's done.
If they're smaller, faster requests, MSMQ would be more appropriate if you have it available.
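For the database-queue approach, a minimal sketch; the table and column names are hypothetical:

```csharp
using System;
using System.Data.SqlClient;

class DbQueue
{
    // Requester: drop a request row and let the other side pick it up.
    public static Guid Enqueue(string connStr, string requestType)
    {
        var id = Guid.NewGuid();
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "INSERT INTO WorkQueue (Id, RequestType, Status) VALUES (@id, @type, 'Pending')", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@type", requestType);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        return id;
    }

    // Worker: atomically claim one pending row; returns the claimed id or null.
    public static Guid? ClaimNext(string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            @"UPDATE TOP (1) WorkQueue SET Status = 'InProgress'
              OUTPUT inserted.Id WHERE Status = 'Pending'", conn))
        {
            conn.Open();
            object result = cmd.ExecuteScalar();
            return result == null ? (Guid?)null : (Guid)result;
        }
    }
}
```

The requester then polls its row until the other side updates the record to say it's done.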
However back to your original question, if you want to use it, any of the client-server examples should work just fine. The API doesn't care.
You can use SignalR self-host; you really don't want to create your own WebSockets framework, since that will take a long time.
Here is a link on how to start an OWIN server in a Windows service:
Hosting WebAPI using OWIN in a windows service
And here is how to set up SignalR in self-host mode:
Tutorial: SignalR Self-Host
You can accomplish this with Memory Mapped Files.
Inter-Process Communication with Memory-Mapped Files
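A minimal sketch of that suggestion; note that a named, non-persisted memory-mapped file is only shared between processes on the same machine, so this covers the local-IPC case rather than communication across machines. The map name is hypothetical:

```csharp
using System;
using System.IO.MemoryMappedFiles;

class SharedStatus
{
    // Writer process: publish a status byte into a named shared region.
    // Keep the returned map alive for as long as the status should be visible.
    public static MemoryMappedFile Publish(byte status)
    {
        MemoryMappedFile mmf = MemoryMappedFile.CreateOrOpen("ServiceStatus", 1024);
        using (MemoryMappedViewAccessor accessor = mmf.CreateViewAccessor())
        {
            accessor.Write(0, status); // e.g. 1 = alive, 0 = stopping
        }
        return mmf;
    }

    // Reader process: open the same named region and read the byte back.
    public static byte Read()
    {
        using (MemoryMappedFile mmf = MemoryMappedFile.OpenExisting("ServiceStatus"))
        using (MemoryMappedViewAccessor accessor = mmf.CreateViewAccessor())
        {
            return accessor.ReadByte(0);
        }
    }
}
```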
I am trying to create a monitoring application for our operations department to be proactive when dealing with systems that are encountering problems. I created an app that does the job, but it has some drawbacks:
Each running copy of the app sends individual pings to the systems, when one ping would suffice.
I have 3 different APIs for getting the status of our systems, depending on whether they're hosted in IIS, WCF, or desktop.
To fix the first issue, I was going to create a database; an interim service (a monitor app) would make the pings, and then the app would query the database for updates. After thinking about this, I realized the second issue and decided it is a future problem.
So my thought was, rather than have the interim application pinging the systems, to simply have each system expose one interface through which it posts its status to the database every X interval. But then I ran into a problem with the WCF and IIS services we have. These services can sit for days without anyone actually using them. How would I make these services continue to post their data?
My questions are:
Is it better to have data REQUESTED or PUSHED in this type of situation?
If REQUESTED, what is a suggested practice for maintaining a single API across multiple platforms (IIS, WCF, Desktop)?
If PUSHED, how would you handle the case of the Web services which are instance based and not continuously running?
For web services, one solution might be to implement a health-check endpoint, something that you can simply call like: webservice/isServiceUp?
I prefer that this information be PULLED. If a service / web service / application is down, then you can't possibly rely on it to write something to the DB; it would be possible, but highly risky and unreliable.
In a real-world situation, it is a little more complicated than that, because something might happen between your service host and the consumer (a DNS problem, for example), in which case you would want to consider the case of not getting anything back from isServiceUp (no true, no false, just a 400-level error)...
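A minimal sketch of a puller that treats "no answer" as its own state, since a timeout or 400-level error is not the same as the service reporting itself down. The URL convention and timeout are hypothetical:

```csharp
using System;
using System.IO;
using System.Net;

enum ServiceState { Up, Down, Unknown }

class HealthChecker
{
    public static ServiceState Check(string url) // e.g. "http://host/webservice/isServiceUp"
    {
        try
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Timeout = 5000; // don't let a dead host hang the monitor
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                // the endpoint answers true/false about its own health
                return reader.ReadToEnd().Trim() == "true" ? ServiceState.Up : ServiceState.Down;
            }
        }
        catch (WebException)
        {
            // timeout, DNS failure, or a 400-level error: neither true nor false came back
            return ServiceState.Unknown;
        }
    }
}
```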
Consider using your load balancer for checking on APPS / web services and proactively switching to a different IP in case of issues... it is a possibility.
To be precise: I have a .NET Web Forms system. I need a way to check some values and perform tasks depending on these values in a periodic manner. Let's say: every month I have to check whether my customers' credit cards are still valid. There are some other tasks/checks at shorter intervals.
What is the best approach to this? I thought about a Windows service, but I have also read about WCF. Please advise on a modern and good way to solve this task. I'm thinking about .NET 4.0.
WCF is just an interface that can run in either a Windows service or IIS. You use this WCF interface to trigger some synchronous or asynchronous actions.
Your case sounds like you want a Windows service on a timer to perform validation on data stored in a database or file.
If you want to start a process on demand then adding a WCF endpoint might be useful, if the timer approach is good enough, then you need not bother with WCF.
References for hosting WCF in a Windows process:
microsoft.com
codeproject.com
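To illustrate the timer approach, a minimal sketch; the service name, interval, and validation method are hypothetical:

```csharp
using System.ServiceProcess;
using System.Timers;

public class ValidationService : ServiceBase
{
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        // Check once a day; the monthly credit-card validation runs
        // only when it is actually due.
        _timer = new Timer(24 * 60 * 60 * 1000);
        _timer.Elapsed += (sender, e) => RunPeriodicChecks();
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
        _timer.Dispose();
    }

    private void RunPeriodicChecks()
    {
        // ... query the database, validate card expiry dates, etc. ...
    }
}
```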
As you've surmised, a Windows Service is a good approach to this problem.
Similarly, you could write a Console application and have it run via a scheduled task in Windows.
It depends on how your backend works and what you're most familiar with really.
Writing a console application is very simple to do, but it's perhaps not the best approach, as you need to ensure that a user is logged on so that the scheduled task can run.
A service is slightly more complicated to implement, but it has the benefits of being integrated into the OS properly.
MSDN has a good guide to writing a service in C#, and you don't necessarily need WCF:
http://msdn.microsoft.com/en-us/library/aa984464(v=vs.71).aspx
You could use something like Quartz.NET. See link - http://quartznet.sourceforge.net/
If you have limited control over server (i.e. only regular HTTP pages allowed):
You can also use a web page to trigger the task - this way you don't need any additional components installed on the server. Then have some other machine configure periodic requests to the page(s) that trigger tasks. Make sure that tasks are restartable and short enough that you can finish each one within a regular page request. The page can respond with "next task to run" data, so your client page can continue pinging the server till the whole operation is finished.
Note: Trying to run long-running tasks inside a web service process is unreliable due to app pool/app domain recycles.
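A minimal sketch of such a trigger page as a plain HTTP handler; the handler name and the TaskRunner helper are hypothetical:

```csharp
using System.Web;

public class RunTaskHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Run one short, restartable unit of work per request, so an
        // app pool recycle can never kill a long-running job mid-flight.
        string next = TaskRunner.RunNextChunk();
        context.Response.ContentType = "text/plain";
        // The calling machine keeps requesting the page until "done".
        context.Response.Write(next ?? "done");
    }

    public bool IsReusable { get { return true; } }
}

public static class TaskRunner
{
    // Hypothetical: performs one chunk and names the next task,
    // or returns null when the whole operation is finished.
    public static string RunNextChunk() { return null; }
}
```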
I have 50+ kiosk style computers that I want to be able to get a status update, from a single computer, on demand as opposed to an interval. These computers are on a LAN in respect to the computer requesting the status.
I researched WCF however it looks like I'll need IIS installed and I would rather not install IIS on 50+ Windows XP boxes -- so I think that eliminates using a webservice unless it's possible to have a WinForm host a webservice?
I also researched using System.Net.Sockets and even got a barely functional prototype going however I feel I'm not skilled enough to make it a solid and reliable system. Given this path, I would need to learn more about socket programming and threading.
These boxes are running .NET 3.5 SP1, so I have complete flexibility in the .NET version however I'd like to stick to C#.
What is the best way to implement this? Should I just bite the bullet and learn Sockets more or does .NET have a better way of handling this?
edit:
I was going to go with a two way communication until I realized that all I needed was a one way communication.
edit 2:
I was avoiding the traditional server/client and going with an inverse because I wanted to avoid consuming too much bandwidth and wasn't sure what kind of overhead I was talking about. I was also hoping to have more control over the individual kiosks. After looking at it, I think I can still have that with WCF and connect by IP (I wasn't aware I could connect by IP; I was thinking I would have to add 50 webservices or something).
WCF does not have to be hosted within IIS; it can be hosted within your WinForms app, as a console application, or as a Windows service.
You can have each computer host its service within the WinForms app, and write a program on your own computer to call each computer's service to get the status information.
Another way of doing it is to host one service on your own computer, and have the 50+ computers call the service whenever their status is updated; you can use a database for the service to persist the status data of each node within the network. This option is easier to maintain and more scalable.
P.S.
WCF aims to replace .NET Remoting; the alternatives can be the net.tcp binding or net.pipe.
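A minimal self-hosting sketch using the net.tcp binding; the contract, address, and port are hypothetical:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IKioskStatus
{
    [OperationContract]
    string GetStatus();
}

public class KioskStatusService : IKioskStatus
{
    public string GetStatus() { return "OK"; } // ... return real kiosk status here ...
}

class Program
{
    static void Main()
    {
        // Self-host over net.tcp; no IIS needed on the kiosk.
        var host = new ServiceHost(typeof(KioskStatusService),
            new Uri("net.tcp://localhost:8523/kiosk"));
        host.AddServiceEndpoint(typeof(IKioskStatus), new NetTcpBinding(), "");
        host.Open();
        // The monitoring machine connects by IP, e.g.:
        // ChannelFactory<IKioskStatus>.CreateChannel(new NetTcpBinding(),
        //     new EndpointAddress("net.tcp://192.168.0.10:8523/kiosk"));
        Console.WriteLine("Listening. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```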
Unless you have plans to scale this to several thousand clients I don't think WCF performance will even be a fringe issue. You can easily host WCF services from windows services or Winforms applications, and you'll find getting something working with WCF will be fairly simple once you get the key concepts.
I've deployed something similar with around 100-150 clients with great success.
There's plenty of resources out on the web to get you started - here's one to get you going:
http://msdn.microsoft.com/en-us/library/aa480190.aspx
Whether you use a web service or WCF on your central server, you only need to install and configure IIS on the server (and not on the 50+ clients).
What you're trying to do is a little unclear from the question, but if the clients need to call the server (to get a server status, for example), then they just call a method on the webservice running on the server.
If instead you need to have the server call the clients from time to time, then you'll need to have each client call a sign-in method on the server webservice each time the client starts up. The sign-in method would take a delegate method from the client as a parameter. The server would then call this delegate when it needed information from the client.
Setting up each client with its own web service would represent an inversion of the traditional (one server, multiple clients) client/server architecture, and as you've already noted this would be impractical.
Do not use remoting.
If you want robustness and scalability you end up ruling out everything but what are essentially stateless remote procedure calls. Since this is exactly the capability of web services, and web services are simpler and easier to build, remoting is an essentially pointless technology.
Callbacks with remote delegates are on the performance/reliability forbidden list, so if you were thinking of using remoting for that, think again.
Use web services.
I know you don't want to be polling, but I don't think you need to. Since you say all your units are on a single network segment, I suggest UDP for broadcast change notifications, essentially setting a dirty flag and allowing the application to (re-)fetch on demand. It's still not reliable, but it's easy and very fast because it's broadcast.
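A minimal sketch of the broadcast dirty-flag idea; the port and payload format are hypothetical:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class ChangeNotifier
{
    const int Port = 9050; // hypothetical port

    // Kiosk side: broadcast a "dirty" notification to the whole segment.
    public static void NotifyChanged(string kioskId)
    {
        using (var udp = new UdpClient())
        {
            udp.EnableBroadcast = true;
            byte[] data = Encoding.UTF8.GetBytes(kioskId);
            udp.Send(data, data.Length, new IPEndPoint(IPAddress.Broadcast, Port));
        }
    }

    // Monitor side: listen for notifications, then fetch details on demand.
    public static void Listen()
    {
        using (var udp = new UdpClient(Port))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] data = udp.Receive(ref remote);
                string kioskId = Encoding.UTF8.GetString(data);
                // ... mark kioskId dirty and (re-)fetch its status ...
            }
        }
    }
}
```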
As others have said you don't need IIS, you can self-host. See ServiceHost class for details on how to do this.
I'd suggest using .NET Remoting. It's quite easy to implement and doesn't require anything else.
For me, it is better to learn networking, or the manual way of socket communication; web services are much slower because they carry metadata.
Your clients and servers can become multithreaded applications; just imitate the request/response architecture. It is much easier to implement a network application like this.
If you just need a status update, you can use a much simpler solution, such as simple TCP server/client messaging or, like orrsella said, remoting. WCF is kind of overkill here.
One note though: if all your 50+ kiosks are connected via the internet, then you might need to use a VPN or have an open port on each kiosk (which is a security risk) so that your server can retrieve status updates from each kiosk.
We had a similar situation, but the status is sent to our server periodically, so we only have one port to protect/secure. The frequency of the update is configurable so as to accommodate slower clients.
As someone who implemented something like this with over 500+ clients and growing:
Message queuing is the way to go.
We have gone from an internally developed TCP server and client, to WCF polling, and ended up with message queuing. It's the only guaranteed way to get data to and from clients and servers over the internet. As a bonus, many of these solutions have an extensive framework making it trivial to implement publish-subscribe, send-one-way, point-to-point sending, and request-reply. Some of these are possible with WCF, but it will involve crying, shouting, whimpering and long nights, not to mention gallons of coffee.
A couple of important remarks:
Letting a process poll the clients instead of the other way around = bad idea. It is not scalable at all, and you will soon run into trouble when the process takes too long to complete. Not to mention having to handle all the IP addresses (do you have access to all clients on the required ports? What happens when an IP changes, etc.?).
What we have done: the clients send status updates to a central message queue on a regular interval (you can easily implement live updates in the UI), and each client also listens on its own queue for a GetStatusRequest message. If it receives this, it answers (with a timeout). This way, we can see the overall status of all clients at all times and get a specific status of a specific client when needed.
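For illustration, the status-update side looks roughly like this with the NMS API; the broker URI, queue name, and message property are hypothetical:

```csharp
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class StatusPublisher
{
    // Each client pushes its status to a central queue on an interval.
    public static void SendStatus(string brokerUri, string clientId, string status)
    {
        IConnectionFactory factory = new ConnectionFactory(brokerUri); // e.g. "tcp://broker:61616"
        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        {
            connection.Start();
            IDestination queue = session.GetQueue("client.status");
            using (IMessageProducer producer = session.CreateProducer(queue))
            {
                ITextMessage message = session.CreateTextMessage(status);
                message.Properties.SetString("clientId", clientId);
                producer.Send(message);
            }
        }
    }
}
```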
Concerning bandwidth: kiosks usually show images/video etc.; status messages of 1 KB or less will not be significant overhead.
I CANNOT stress enough that the current design you present will have a very intensive development cycle AND will not scale or extend well (trust me, we have learned this lesson). Beyond that, building a good client/server protocol for this type of stuff is a hard job that will be totally useless afterwards if you make a design error (migrating a protocol is not easy).
We have built our solution on top of ActiveMQ (using the NMS library for C#) and are currently extending Simple Service Bus for our internal workings.
We only use WCF for the communication between our WinForms app and the centralized service(s).