I have a Windows service that currently outputs logging activity to either a text file or a database (depending on the activity). What I would like to do is run another process (probably an executable) that can connect to that service and receive activity updates from it using a publish/subscribe approach.
In theory, I guess this can be done by hosting a socket connection on the Windows service and pushing activity data as it happens. I wonder, though, if there is a better approach? Is there maybe a framework that can do all this for me easily? Or maybe I should use an MQ product to broadcast the application activity?
I am using C# and .NET 4.5.
There are several ways:
Sockets give you good two-way communication.
WCF is another option, and it also supports two-way communication.
Database: if you need to keep a history of the signals, you could use a single table that the server/host inserts into and the client reads from using SQL Dependency. That lets you pick up new signals without a timer or busy waiting, almost in real time (see the sketch below).
Another good option is SignalR.
I have used all of these technologies except WCF.
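As a minimal sketch of the SQL Dependency option above (the dbo.ActivityLog table, its columns, and the connection string are placeholders I made up; Service Broker must be enabled on the database for query notifications to work):

```csharp
using System;
using System.Data.SqlClient;

// Sketch only: table, columns, and connection string are placeholders.
class ActivityWatcher
{
    private const string ConnectionString =
        "Data Source=.;Initial Catalog=Logs;Integrated Security=true";

    public void Start()
    {
        SqlDependency.Start(ConnectionString);
        Subscribe();
    }

    private void Subscribe()
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Message, CreatedAt FROM dbo.ActivityLog", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                Console.WriteLine("New activity available");
                Subscribe(); // a notification fires only once, so re-register
            };

            connection.Open();
            using (command.ExecuteReader()) { } // executing the query registers the notification
        }
    }
}
```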
Related
I'm facing an issue with a highly available Windows service I developed with a master/slave setup.
Context:
The service synchronizes data to two endpoints: one is a local database and the other is an external endpoint. The local database is duplicated on both machines, so both master and slave need to sync it. The external endpoint only needs to be synced once.
The master syncs to the external service by default; the slave takes over when the master is down. When the master comes back up while the slave is still synchronizing to the external endpoint, the master asks the slave to finish a portion of the work and report back when it is done, so the master can continue with the remaining work.
All this needs to happen asynchronously; I do not want one program to stop and wait for the other to respond (for example, while the slave is still handling the data).
I already implemented all the logic for this.
Setup:
Two Windows services running on two different machines.
Currently the communication is done over Named Pipes.
The problem:
Named pipes aren't reliable enough for the throughput involved. The connection often crashes, and named pipes aren't made for frequent reconnecting, closing, and reopening. I also find that they frequently 'hang' when sending or receiving messages. Retrying sometimes works, but I don't think I should have to retry. I need reliable communication between the two instances.
Solutions:
I've been looking for an alternative to Named Pipes, but can't seem to find a solution I'm convinced would work, mostly because many of the technologies are aimed at communication between a service and a client over HTTP.
WCF over MSMQ is also not what I need, because I only want communication to happen when both instances are online. WCF in general is also more focused on one endpoint receiving data and sending a response. I need bidirectional communication, so both instances must be able to send and receive messages at any time.
I think my best option is SignalR, but I'm not convinced of that either.
Have you looked at MassTransit over RabbitMQ?
We have been using them together very successfully both for intra-service and client/service communication for a few years now.
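A minimal sketch of what that could look like, assuming a RabbitMQ broker reachable by both instances; the broker address, queue name, and message type below are made up for illustration, and the exact configuration API varies a bit between MassTransit versions:

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;

// Hypothetical message published by the slave when it finishes its portion of work.
public class WorkHandedOver
{
    public Guid BatchId { get; set; }
}

class Program
{
    static async Task Main()
    {
        var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Host(new Uri("rabbitmq://broker-host"), h =>
            {
                h.Username("guest");
                h.Password("guest");
            });

            // Each instance (master and slave) listens on its own queue.
            cfg.ReceiveEndpoint("master-instance", e =>
            {
                e.Handler<WorkHandedOver>(ctx =>
                {
                    Console.WriteLine("Slave finished batch " + ctx.Message.BatchId);
                    return Task.CompletedTask;
                });
            });
        });

        await bus.StartAsync();

        // The other instance publishes when it has finished its portion of work.
        await bus.Publish(new WorkHandedOver { BatchId = Guid.NewGuid() });

        Console.ReadLine();
        await bus.StopAsync();
    }
}
```

Both instances run essentially the same code with different queue names, which gives you the "either side can send at any time" behavior you're after without hand-rolling a protocol.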
I am trying to use web sockets to allow two Windows services on different machines to pass data back and forth. Almost all the examples and information I have found are about using web sockets for client/server communication, and I am having trouble figuring out how to set this up. I have considered using WebSocketHost as part of Microsoft.ServiceModel.WebSockets, but I am unsure how to bind it to a local port rather than a URL.
Does anyone have any suggestions?
Thanks
I am trying to use web sockets to allow two Windows services on different machines to pass data back and forth.
You can open sockets on both machines using WebSockets, as you found. The examples mention clients and servers because that is the typical usage, but the API really doesn't care: as long as each side has a listener and a sender, they can communicate.
However, I would like to mention that this isn't as simple as it sounds, because both machines won't always be available. Sometimes one or the other is busy, the network is blocked, something else is going on, or the listener is too busy to respond right away, so you're going to end up needing some sort of queuing on both sides.
If you're doing a process-based operation where one side tells the other "I want X" and it's a big operation, like producing a document, I've found it much more resilient to build a queue in a database and toss the request in there, then wait for the other side to update the record to say it's done.
If they're smaller, faster requests, MSMQ would be more appropriate, if you have it available.
However, back to your original question: if you want to use WebSockets, any of the client-server examples should work just fine. The API doesn't care.
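For what it's worth, here is a rough sketch of the plain .NET 4.5 route (HttpListener on one service, ClientWebSocket on the other). The URL and port are placeholders, HttpListener may need a URL reservation (netsh http add urlacl) or elevated rights for non-localhost prefixes, and server-side WebSocket support requires Windows 8 / Server 2012 or later:

```csharp
using System;
using System.Net;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        RunAsync().GetAwaiter().GetResult();
    }

    static async Task RunAsync()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:5000/data/");
        listener.Start();

        // "Server" side: accept one WebSocket and print the first message received.
        Task serverTask = Task.Run(async () =>
        {
            HttpListenerContext context = await listener.GetContextAsync();
            HttpListenerWebSocketContext wsContext = await context.AcceptWebSocketAsync(null);
            WebSocket socket = wsContext.WebSocket;

            var buffer = new byte[4096];
            WebSocketReceiveResult result =
                await socket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
            Console.WriteLine("Received: " + Encoding.UTF8.GetString(buffer, 0, result.Count));
        });

        // "Client" side: the other service connects and sends a status message.
        using (var client = new ClientWebSocket())
        {
            await client.ConnectAsync(new Uri("ws://localhost:5000/data/"), CancellationToken.None);
            byte[] payload = Encoding.UTF8.GetBytes("status:ok");
            await client.SendAsync(new ArraySegment<byte>(payload),
                WebSocketMessageType.Text, true, CancellationToken.None);
            await serverTask; // wait until the listener has printed the message
        }

        listener.Stop();
    }
}
```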
You can use SignalR self-host; you really don't want to create your own WebSockets framework, since that will take a long time.
Here is a link on how to start an OWIN server in a Windows service:
Hosting WebAPI using OWIN in a windows service
And here is how to set up SignalR in self-host:
Tutorial: SignalR Self-Host
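A minimal sketch of the SignalR 2 self-host route those tutorials describe; the base URL, hub name, and client method name here are placeholders, and it assumes the Microsoft.AspNet.SignalR.SelfHost and Microsoft.Owin.Hosting packages:

```csharp
using System;
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Hosting;
using Owin;

class Program
{
    static void Main()
    {
        // Start the OWIN pipeline (and SignalR) inside the Windows service or console app.
        using (WebApp.Start<Startup>("http://localhost:8080"))
        {
            Console.WriteLine("SignalR server running on http://localhost:8080");
            Console.ReadLine();
        }
    }
}

class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.MapSignalR(); // maps hubs under /signalr
    }
}

// Hypothetical hub: the service pushes status updates to every connected client.
public class StatusHub : Hub
{
    public void BroadcastStatus(string status)
    {
        Clients.All.statusChanged(status);
    }
}
```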
You can accomplish this with Memory Mapped Files.
Inter-Process Communication with Memory-Mapped Files
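A minimal sketch of that approach, with placeholder map and event names; note that memory-mapped files only work between processes on the same machine:

```csharp
using System;
using System.IO.MemoryMappedFiles;
using System.Threading;

class Program
{
    static void Main(string[] args)
    {
        // Both processes open the same named map and the same named event.
        using (var mmf = MemoryMappedFile.CreateOrOpen("ServiceActivity", 4096))
        using (var accessor = mmf.CreateViewAccessor())
        using (var signal = new EventWaitHandle(false, EventResetMode.AutoReset, "ServiceActivitySignal"))
        {
            if (args.Length > 0 && args[0] == "write")
            {
                accessor.Write(0, 42L);   // writer process: publish a value
                signal.Set();             // tell the reader something changed
            }
            else
            {
                signal.WaitOne();                     // reader process: wait for the signal
                long value = accessor.ReadInt64(0);
                Console.WriteLine("Received: " + value);
            }
        }
    }
}
```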
I have a LAN composed of 3 PCs. PC1 has the MS SQL database installed; this computer will act as the server.
PC2 and PC3 will each have a desktop application that displays the data from PC1.
My problem is how to make each PC (PC2 and PC3) have the same copy of the data.
Suppose that on PC2, employee 0001's first name is updated from John to Peter and the change is saved. Without refreshing the application on PC3, employee 0001 will still show John as the first name.
What would be the best approach for this? My programming level is not that good, but I'm open to all suggestions/concepts/examples/etc.
Thanks.
If you want an immediate update on all clients right after the data changes, then you need some sort of notification system, either polled or pushed.
You can implement a push mechanism using, for example, WCF with a callback contract. Your client PCs would need to implement the relevant callback interface and stay connected to the server PC's WCF service. The callback call can actually carry the new data. Each client needs to filter out notifications that resulted from its own changes. A push mechanism is the quick and efficient way.
Check this Stack Overflow answer for an example of a WCF callback.
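A rough sketch of what such a callback contract could look like; the contract, method, and service names here are made up, and the service needs a duplex binding such as netTcpBinding or wsDualHttpBinding:

```csharp
using System.Collections.Generic;
using System.ServiceModel;

// Hypothetical contract: clients subscribe once and then receive pushed updates.
[ServiceContract(CallbackContract = typeof(IDataChangedCallback))]
public interface IDataSyncService
{
    [OperationContract]
    void Subscribe();
}

public interface IDataChangedCallback
{
    [OperationContract(IsOneWay = true)]
    void OnEmployeeUpdated(int employeeId, string firstName);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class DataSyncService : IDataSyncService
{
    private readonly List<IDataChangedCallback> _subscribers = new List<IDataChangedCallback>();

    public void Subscribe()
    {
        // Remember the calling client's channel so we can push changes to it later.
        var callback = OperationContext.Current.GetCallbackChannel<IDataChangedCallback>();
        if (!_subscribers.Contains(callback))
            _subscribers.Add(callback);
    }

    // Called by server-side code after a change has been committed.
    public void NotifyAll(int employeeId, string firstName)
    {
        foreach (var subscriber in _subscribers)
            subscriber.OnEmployeeUpdated(employeeId, firstName);
    }
}
```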
A pull mechanism would require a background thread in every client application checking the server for changes. You can use a separate database table with a version counter that gets incremented each time anything changes on the server. Client applications would poll that counter, compare it with the latest version they have, and update the data when a new version is discovered. It is a much less efficient mechanism, though, as you need to poll frequently and fetch all the data each time there is a new version. You can make the versioning more sophisticated and detect exactly what changed, but that can get complicated quickly with multiple clients, and overall it does not scale very well. It is generally simpler than push, though, and for simple applications without too much data it would be enough.
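A minimal sketch of the polled version counter; the dbo.DataVersion table, connection string, and polling interval are placeholders, and the server is assumed to increment the counter whenever employee data changes:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

class VersionPoller
{
    private const string ConnectionString =
        "Data Source=PC1;Initial Catalog=Company;Integrated Security=true";
    private long _lastVersion;

    public void Run(Action refresh)
    {
        while (true)
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand("SELECT Version FROM dbo.DataVersion", connection))
            {
                connection.Open();
                long current = Convert.ToInt64(command.ExecuteScalar());
                if (current != _lastVersion)
                {
                    _lastVersion = current;
                    refresh(); // reload the employee grid, etc.
                }
            }
            Thread.Sleep(TimeSpan.FromSeconds(5)); // polling interval
        }
    }
}
```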
You need to tell the other machines when to update. This could be accomplished with simple messages sent over the network using UDP broadcast; the other PCs could then execute their refresh method.
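A minimal sketch under those assumptions; the port number and message format are arbitrary, and since UDP broadcast is not guaranteed delivery it should be treated as a hint to refresh rather than as the data itself:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

static class ChangeNotifier
{
    private const int Port = 9050;

    // Called on the PC that just saved a change (e.g. PC2).
    public static void Broadcast(string change)
    {
        using (var udp = new UdpClient { EnableBroadcast = true })
        {
            byte[] payload = Encoding.UTF8.GetBytes(change);
            udp.Send(payload, payload.Length, new IPEndPoint(IPAddress.Broadcast, Port));
        }
    }

    // Run on the other PCs (e.g. PC3); refresh the UI when a message arrives.
    public static void Listen(Action<string> onChange)
    {
        using (var listener = new UdpClient(Port))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] data = listener.Receive(ref remote);
                onChange(Encoding.UTF8.GetString(data)); // e.g. "employee:0001:updated"
            }
        }
    }
}
```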
Well utivich... this is the same thing as a web application, really; it's a common problem. Usually the other clients will have stale data until the record is reloaded, or when they save, the server may throw an exception on stale data based on a SQL timestamp. However, with a desktop application you can set up event notification, just like a chat application, where the server pushes events to subscribers and the clients can then update the record or do whatever you need to do.
I have the following architecture for a project I'm working on.
My question is how to begin implementing the TCP/IP responder part.
Its function, in case the diagram is hard to read, is to wait for a connection from the Order Viewing client and subsequently notify that client of incoming orders.
I was thinking of a queue, but unfortunately I don't know where something like this would fit in the VS2008 hierarchy of things.
If it's part of the ASP.NET web application, should I use the application start event to start the TCP/IP responder?
It's not a web service, because those respond to HTTP requests...
If I had to implement your "TCP responder", I'd probably implement it as a Windows service and have both the ASP.NET app and the WinForms client contact it (e.g. to avoid the problem of ASP.NET app-pool recycling, etc.).
That said, I can think of a gazillion easier ways to get the effect you want to achieve (getting the WinForms client to know about new orders), such as:
Using queues, as you mentioned. Windows comes with MSMQ (you need to enable it under "add Windows features"). Using MSMQ from C# is fairly easy (see the sketch after this list). You can also use WCF on top of it if you like.
Exposing an HTTP endpoint on the client and having the client notify the ASP.NET server where it is listening by calling one of its pages.
Writing the orders to the DB and polling it from the client, or using System.Data.SqlClient.SqlDependency to know when there's a change.
Heck, even writing the orders to a file in a shared folder with a FileSystemWatcher would work (though I probably wouldn't recommend that).
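As a rough illustration of the MSMQ option: the queue path and message content below are placeholders, System.Messaging.dll must be referenced, and MSMQ must be enabled on both machines (for a queue on a remote machine you would use a FormatName path instead of the local private path).

```csharp
using System.Messaging; // add a reference to System.Messaging.dll

static class OrderQueue
{
    // Hypothetical local private queue for illustration.
    private const string Path = @".\Private$\orders";

    // Called by the ASP.NET app when a new order comes in.
    public static void Send(string orderId)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path);

        using (var queue = new MessageQueue(Path))
        {
            queue.Send(orderId, "New order");
        }
    }

    // Called by the WinForms client; blocks until a message arrives.
    public static string Receive()
    {
        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            using (var message = queue.Receive())
            {
                return (string)message.Body;
            }
        }
    }
}
```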
Why don't you use HTTP? You already have the HTTP server, so you don't need any TCP responder; just do HTTP polling from the client.
And if you don't want polling, or have too many clients, then you can use something like SignalR for notifications.
I have 50+ kiosk-style computers from which I want to get a status update, from a single computer, on demand as opposed to on an interval. These computers are on a LAN with respect to the computer requesting the status.
I researched WCF, but it looks like I'll need IIS installed, and I would rather not install IIS on 50+ Windows XP boxes, so I think that eliminates using a web service, unless it's possible to have a WinForms app host a web service?
I also researched using System.Net.Sockets and even got a barely functional prototype going; however, I feel I'm not skilled enough to make it a solid and reliable system. Going down this path, I would need to learn more about socket programming and threading.
These boxes are running .NET 3.5 SP1, so I have complete flexibility in the .NET version; however, I'd like to stick to C#.
What is the best way to implement this? Should I just bite the bullet and learn more about sockets, or does .NET have a better way of handling this?
edit:
I was going to go with two-way communication until I realized that all I need is one-way communication.
edit 2:
I was avoiding the traditional server/client setup and going with the inverse because I wanted to avoid consuming too much bandwidth and wasn't sure what kind of overhead I was dealing with. I was also hoping to have more control over the individual kiosks. After looking at it, I think I can still have that with WCF and connect by IP (I wasn't aware I could connect by IP; I was thinking I would have to add 50 web services or something).
WCF does not have to be hosted within IIS; it can be hosted within your WinForms app, as a console application, or as a Windows service.
You can have each computer host its service within the WinForms app, and write a program on your own computer that calls each computer's service to get the status information.
Another way of doing it is to host one service on your own computer and have the 50+ computers call that service whenever their status is updated; you can use a database for the service to persist the status data of each node on the network. This option is easier to maintain and more scalable.
P.S.
WCF aims to replace .NET Remoting; for the transport, the alternatives can be the net.tcp binding or net.pipe.
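A minimal sketch of self-hosting over net.tcp inside the kiosk app or a Windows service; the contract, address, and port are placeholders:

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract exposed by each kiosk.
[ServiceContract]
public interface IKioskStatus
{
    [OperationContract]
    string GetStatus();
}

public class KioskStatusService : IKioskStatus
{
    public string GetStatus()
    {
        return "OK";
    }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(KioskStatusService),
            new Uri("net.tcp://localhost:9000/kioskStatus"));
        host.AddServiceEndpoint(typeof(IKioskStatus), new NetTcpBinding(), string.Empty);
        host.Open(); // in a real app this would go in OnStart or the form's Load handler

        Console.WriteLine("Kiosk status service listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```

The central computer would then create a ChannelFactory&lt;IKioskStatus&gt; pointed at each kiosk's net.tcp address, which is the "connect by IP" part mentioned in the question's edit.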
Unless you have plans to scale this to several thousand clients, I don't think WCF performance will even be a fringe issue. You can easily host WCF services from Windows services or WinForms applications, and you'll find getting something working with WCF fairly simple once you grasp the key concepts.
I've deployed something similar with around 100-150 clients with great success.
There are plenty of resources on the web to get you started; here's one to get you going:
http://msdn.microsoft.com/en-us/library/aa480190.aspx
Whether you use a web service or WCF on your central server, you only need to install and configure IIS on the server (and not on the 50+ clients).
What you're trying to do is a little unclear from the question, but if the clients need to call the server (to get a server status, for example), then they just call a method on the webservice running on the server.
If instead you need to have the server call the clients from time to time, then you'll need to have each client call a sign-in method on the server webservice each time the client starts up. The sign-in method would take a delegate method from the client as a parameter. The server would then call this delegate when it needed information from the client.
Setting up each client with its own web service would represent an inversion of the traditional (one server, multiple clients) client/server architecture, and as you've already noted this would be impractical.
Do not use remoting.
If you want robustness and scalability you end up ruling out everything but what are essentially stateless remote procedure calls. Since this is exactly the capability of web services, and web services are simpler and easier to build, remoting is an essentially pointless technology.
Callbacks with remote delegates are on the performance/reliability forbidden list, so if you were thinking of using remoting for that, think again.
Use web services.
I know you don't want to be polling, but I don't think you need to. Since you say all your units are on a single network segment, I suggest UDP broadcast for change notifications, essentially setting a dirty flag and allowing the application to (re-)fetch on demand. It's still not reliable, but it's easy and very fast because it's broadcast.
As others have said, you don't need IIS; you can self-host. See the ServiceHost class for details on how to do this.
I'd suggest using .NET Remoting. It's quite easy to implement and doesn't require anything else.
For me it is better to learn networking, or the manual way of socket communication; web services are much slower because they carry metadata.
Your clients and the server can become multithreaded applications; just imitate the request/response architecture. It is quite easy to implement a network application this way.
If you just need a status update, you can use a much simpler solution, such as simple TCP server/client messaging or, as orrsella said, remoting. WCF is kind of overkill here.
One note, though: if all your 50+ kiosks are connected via the internet, then you might need to use a VPN or have an open port on each kiosk (which is a security risk) so that your server can retrieve status updates from each kiosk.
We had a similar situation, but the status is sent to our server periodically, so we only have one port to protect/secure. The frequency of the updates is configurable to accommodate slower clients.
As someone who implemented something like this with 500+ clients and growing:
Message queuing is the way to go.
We went from an internally developed TCP server and client, to WCF polling, and ended up with message queuing. It's the only guaranteed way to get data to and from clients and servers over the internet. As a bonus, many of these solutions come with an extensive framework, making it trivial to implement publish/subscribe, send-one-way, point-to-point, and request/reply messaging. Some of these are possible with WCF, but it will involve crying, shouting, whimpering, and long nights, not to mention gallons of coffee.
A couple of important remarks:
Letting a process poll the clients instead of the other way around is a bad idea. It is not scalable at all, and you will soon run into trouble when the process takes too long to complete, not to mention having to handle all the IP addresses (do you have access to all clients on the required ports? What happens when an IP changes, etc.?).
What we have done: the clients send status updates to a central message queue on a regular interval (so you can easily implement live updates in the UI), and each client also listens on its own queue for a GetStatusRequest message. If it receives one, it answers (with a timeout). This way, we can see the overall status of all clients at all times and get a specific status from a specific client when needed.
Concerning bandwidth: kiosks usually show images/video etc., so status messages of 1 KB or less will not be a big overhead.
I CANNOT stress enough that the current design you present will have a very intensive development cycle AND will not scale or extend well (trust me, we have learned this lesson). Besides that, building a good client/server protocol for this type of thing is a hard job that will be totally useless afterwards if you make a design error (migrating a protocol is not easy).
We have built our solution on top of ActiveMQ (using the NMS library for C#) and are currently extending Simple Service Bus for our internal workings.
We only use WCF for the communication between our WinForms app and the centralized service(s).
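For reference, a bare-bones sketch of the ActiveMQ/NMS approach described above; the broker URI and queue name are placeholders, and it assumes the Apache.NMS and Apache.NMS.ActiveMQ packages:

```csharp
using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class Program
{
    static void Main()
    {
        var factory = new ConnectionFactory("tcp://broker-host:61616");

        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        {
            connection.Start();
            IDestination destination = session.GetQueue("kiosk.status");

            // Producer side: a kiosk pushes its status on a regular interval.
            using (IMessageProducer producer = session.CreateProducer(destination))
            {
                producer.Send(session.CreateTextMessage("kiosk-17:online"));
            }

            // Consumer side: the central service listens for status messages.
            using (IMessageConsumer consumer = session.CreateConsumer(destination))
            {
                consumer.Listener += message =>
                {
                    var text = message as ITextMessage;
                    Console.WriteLine(text != null ? text.Text : "non-text message");
                };
                Console.ReadLine(); // keep the listener alive
            }
        }
    }
}
```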