I'm designing a client-server architecture which is implemented using Windows Communication Foundation. In one of the use cases, the server needs to request the status of the client(s), which means it needs to call the SendStatus() method on the client and ask for its status. I was just wondering if this use case can be implemented using WCF, without creating a standalone service on the client side. I'm trying to avoid sockets because the client is a background service and is essentially always connected to the server. I understand that WCF eventually uses sockets for communication, but I'm specifically trying to use WCF since this is more like a proof of concept.
A workaround I thought of was that the client could call the SendClientStatus() method on the server and send its status every 5 seconds or so. But then again this doesn't seem like a good approach. Any help would be appreciated.
In the world of WCF, you have more or less two options.
A) A Duplex service with Dual Http Binding
B) A no-return-value polling scheme - this is essentially what you described. The naive implementation, as you correctly note, is not that great, but there are optimizations. Since you do not need anything returned from SendClientStatus (correct?), you can optimize the communication by only sending an update when there is one - i.e. as long as the status of the client remains the same, nothing is sent to the server. Depending on how frequently the client status changes, this can greatly reduce the traffic. Duplex services require some extra configuration that you want to avoid unless you really need them.
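If you do go the duplex route (option A), the heart of it is a callback contract. Here is a minimal sketch - the contract and method names are hypothetical, not taken from your code:

```csharp
using System.ServiceModel;

// Hypothetical contracts: the callback contract is what lets the server
// call SendStatus() on the client over the connection the client opened.
[ServiceContract(CallbackContract = typeof(IStatusCallback))]
public interface IStatusService
{
    [OperationContract]
    void Register(string clientId);   // client announces itself once at startup
}

public interface IStatusCallback
{
    [OperationContract]
    string SendStatus();              // server -> client call
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class StatusService : IStatusService
{
    public void Register(string clientId)
    {
        // Grab the callback channel and store it somewhere keyed by clientId,
        // so the server can call SendStatus() on demand later.
        IStatusCallback callback = OperationContext.Current.GetCallbackChannel<IStatusCallback>();
    }
}
```

On the client you would create the proxy with DuplexChannelFactory&lt;IStatusService&gt; and an InstanceContext wrapping your callback implementation. Note that netTcpBinding also supports duplex over a single connection, which tends to need less configuration than the dual HTTP binding.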
Related
I am trying to use web sockets to allow two Windows services on different machines to pass data back and forth. Almost all the examples or information I have found are about using web sockets for client/server communication, and I am having trouble figuring out how to set this up. I have considered using WebSocketHost as part of Microsoft.ServiceModel.WebSockets, but then I am unsure how to bind it to a local port rather than a URL.
Does anyone have any suggestions?
Thanks
I am trying to use web sockets to allow two Windows services on different machines to pass data back and forth.
You can open sockets on both machines using WebSockets as you found. The examples mention clients and servers because this is the typical usage, however the API really doesn't care. As long as each side has a listener and a sender they can communicate.
However I would like to mention that this isn't as simple as it sounds because both machines aren't always available. Sometimes one or the other is busy or the network is blocked or something else is going on, or the listener is too busy to respond right away, so you're going to end up needing some sort of queuing on both sides.
If you're doing a process based operation where one side tells the other "I want X" and it's a big operation like producing a document, I've found it much more resilient to build a queue in a database and toss the request in there, then wait for the other side to update the record to say it's done.
If they're smaller, faster requests, MSMQ would be more appropriate if you have it available.
However back to your original question, if you want to use it, any of the client-server examples should work just fine. The API doesn't care.
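For what it's worth, a bare-bones sketch of the "each side has a listener and a sender" idea using System.Net.WebSockets could look like the following. The URL prefix and port are placeholders, and note that HttpListener's WebSocket support requires Windows 8 / Server 2012 or newer:

```csharp
using System;
using System.Net;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class PeerSocket
{
    // Each service runs a listener like this (the prefix is a placeholder, e.g. "http://+:8080/status/")...
    public static async Task ListenAsync(string prefix)
    {
        var listener = new HttpListener();
        listener.Prefixes.Add(prefix);
        listener.Start();
        while (true)
        {
            HttpListenerContext context = await listener.GetContextAsync();
            if (!context.Request.IsWebSocketRequest) { context.Response.Close(); continue; }

            WebSocket socket = (await context.AcceptWebSocketAsync(subProtocol: null)).WebSocket;
            var buffer = new byte[4096];
            WebSocketReceiveResult result = await socket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
        }
    }

    // ...and connects out to the other machine's listener when it has something to send.
    public static async Task SendAsync(Uri peer, string message)   // e.g. ws://otherhost:8080/status/
    {
        using (var client = new ClientWebSocket())
        {
            await client.ConnectAsync(peer, CancellationToken.None);
            byte[] payload = Encoding.UTF8.GetBytes(message);
            await client.SendAsync(new ArraySegment<byte>(payload), WebSocketMessageType.Text, true, CancellationToken.None);
        }
    }
}
```

Each of the two Windows services would run both halves: listen on its own port, and act as a client towards the other machine.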
You can use SignalR self-host - you really don't want to create your own WebSockets framework, since that will take a long time.
Here is a link on how to start an OWIN server in a Windows service:
Hosting WebAPI using OWIN in a windows service
And here is how to set up SignalR in self-host:
Tutorial: SignalR Self-Host
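The core of a self-hosted SignalR endpoint is quite small. A rough sketch, assuming the Microsoft.AspNet.SignalR.SelfHost and Microsoft.Owin.Hosting packages (the hub name and URL are placeholders):

```csharp
using System;
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Hosting;
using Owin;

class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.MapSignalR();   // exposes /signalr under the chosen base URL
    }
}

// Hypothetical hub - both services can host one and connect to the other side's hub.
public class StatusHub : Hub
{
    public void PushStatus(string status)
    {
        Clients.All.statusReceived(status);
    }
}

class Program
{
    static void Main()
    {
        // Inside a Windows service you would start this in OnStart and dispose it in OnStop.
        using (WebApp.Start<Startup>("http://localhost:8080"))
        {
            Console.WriteLine("SignalR self-host running, press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```

Each Windows service can host a hub like this and use the SignalR .NET client to connect to the other machine's hub, which gives you the back-and-forth messaging without writing any socket code yourself.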
You can accomplish this with Memory Mapped Files.
Inter-Process Communication with Memory-Mapped Files
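Keep in mind that memory-mapped files only work between processes on the same machine. A minimal sketch with System.IO.MemoryMappedFiles (the map name, size and offsets are arbitrary):

```csharp
using System;
using System.IO.MemoryMappedFiles;
using System.Threading;

class MmfExample
{
    const string MapName = "Local\\StatusMap";   // placeholder name

    static void Writer()
    {
        // CreateOrOpen lets either process start first; 4 KB is an arbitrary size.
        using (var mmf = MemoryMappedFile.CreateOrOpen(MapName, 4096))
        using (var accessor = mmf.CreateViewAccessor())
        {
            accessor.Write(0, 42);            // write an int status code at offset 0
            Thread.Sleep(Timeout.Infinite);   // keep the map alive while the reader runs
        }
    }

    static void Reader()
    {
        using (var mmf = MemoryMappedFile.OpenExisting(MapName))
        using (var accessor = mmf.CreateViewAccessor())
        {
            int status = accessor.ReadInt32(0);
            Console.WriteLine("Status: " + status);
        }
    }
}
```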
We are evaluating a new project which will have a .NET server that is available on the internet. We have access to the server, but the hosting is done by a 3rd-party company.
We are evaluating using WCF on the .NET Server. (I have no professional experience with WCF and just reading into the topic). The WCF service will talk to a SQL Server to perform its duties.
Here is the scenario:
Multiple client machines running our own ActionScript software will connect to that .NET Server.
Clients might be online 24/7 and should periodically poll our server to tell the server that they are there.
A client needs to be able to log in, and only if the login has worked will the other calls be allowed; at some point it logs out. So we need to "remember" the state associated with a particular client...
Highest expected load is around 1000 Clients, of which 500 will only do polling while the other 500 will be "active". "Active" means a maximum of 1 call each minute, no heavy payload in each call, neither in the request nor in the response, just 1-3 database accesses per call.
We already tested some "HelloWorld" with ActionScript and WCF using BasicHttp(s)Binding.
But because we need session handling, we were thinking about using the wsHttpBinding binding because it can provide us with WCF sessions.
So far so good, but then I stumbled upon warnings against this approach. My O'Reilly WCF Services book (3rd edition, page 177) cautions against it, and even Microsoft writes that you should be careful using it:
http://msdn.microsoft.com/en-us/magazine/cc163590.aspx
"A service configured for private sessions cannot typically support more than a few dozen (or perhaps up to a few hundred) outstanding clients due to the cost associated with each such dedicated service instance."
So, because we need to identify the state of each client, we could of course implement our own "session handling" on top of the stateless BasicHttpBinding and call that session-handling class each time one of my WCF methods gets called. But I am reluctant to do anything like that - it looks to me like thousands of people must already have faced the same problem.
So, my question now is:
Do you think wsHttpBinding on my server could handle the payload?
How "bad" is it really to go with wsHttpBinding on WCF? Does anybody already have experience with this? Can I use it? What would you use?
Final Remarks:
I am not limited to WCF if we don't like it; we are just supposed to do an evaluation.
From the company's point of view it would also be fine to go for a protobuf-RPC or XML-RPC solution over TCP, with the ActionScript clients implementing that (just examples!). So there is no need to host WCF in IIS on the server, as long as the coding is comfortable enough for the programmers on both sides and the administration on the deployed server is not too much of a burden either. With plain TCP-port-based communication I am a bit afraid of what it would mean for administration with regard to firewalls and the like. Payload is not an issue, and client processing power is also not an issue. The only things I am concerned about are the scalability of the server and security.
Thanks in advance for any suggestions!
I would not be concerned with scalability. You can always add a server or two to your farm in case of issues.
I would rather be concerned with your architecture and the need to store anything in session - are you sure about that?
Note that you don't need ws binding to support sessions, basic binding supports sessions as well.
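If you do keep the binding stateless, the "own session handling" mentioned in the question does not have to be much code. A hypothetical sketch (contract and names invented for illustration) where the client receives a token at login and passes it with every call:

```csharp
using System;
using System.Collections.Concurrent;
using System.ServiceModel;

// Hypothetical contract: the service itself stays per-call/stateless,
// and "session" state is just a token looked up on every request.
[ServiceContract]
public interface IKioskService
{
    [OperationContract]
    Guid Login(string user, string password);

    [OperationContract]
    void ReportStatus(Guid sessionToken, string status);

    [OperationContract]
    void Logout(Guid sessionToken);
}

public class KioskService : IKioskService
{
    // In a web farm this dictionary would live in the database instead of in memory.
    static readonly ConcurrentDictionary<Guid, string> Sessions = new ConcurrentDictionary<Guid, string>();

    public Guid Login(string user, string password)
    {
        // ...validate credentials against the database here...
        var token = Guid.NewGuid();
        Sessions[token] = user;
        return token;
    }

    public void ReportStatus(Guid sessionToken, string status)
    {
        if (!Sessions.ContainsKey(sessionToken))
            throw new FaultException("Not logged in.");
        // ...the 1-3 database accesses per call described in the question...
    }

    public void Logout(Guid sessionToken)
    {
        string ignored;
        Sessions.TryRemove(sessionToken, out ignored);
    }
}
```

Whether you need this at all is, as noted above, the first question to answer; if the per-call state is only "who is this client", a token like this is usually enough.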
This might look like a question where you can read the answer on MSDN, but I still want to ask about the scenario, as I want to solve the business problem.
I have a service hosted on a server, and a client that makes service calls. It currently uses netTcpBinding. Everything works fine when the service is available, i.e. when the server is up and running. Now I need to handle the server-down scenario. I use a local cache file on the client to serve the client's requests while the server is down. Now I want to cache all the requests made while the server is down and make the service calls once the server is up and running again.
I am thinking about using netMsmqBinding, because everything I've read suggests that it works well in disconnected scenarios.
Q.1 Can I use the netMsmq to handle this scenario?
Q.2 If not then what could be another approach with which I can follow to solve this problem?
Q.3 Can I use WS-Discovery in case of server down to find that the client calls won't be able to contact the service?
EDIT: The scenario is client-server, but I do need to give a response to the client on every call. The client is also developed and maintained by me, so I am in a good position to implement the most suitable solution.
Please guide me as I'm not too good with WCF.
Yes, you can use netMsmqBinding for this purpose. We are doing that for services running over a satellite link that can be down often.
One important limitation you need to take into account is that all calls must be one-way, since this is a queue-based transport. If you need to get the results of a request, you'll have to provide a separate response mechanism (it can be a similar queue in the opposite direction).
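A minimal sketch of what the one-way contract and client side could look like over netMsmqBinding (the queue name and address are placeholders, and the private queue has to exist on the target machine):

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract - every operation must be marked IsOneWay over MSMQ.
[ServiceContract]
public interface IOrderService
{
    [OperationContract(IsOneWay = true)]
    void SubmitOrder(string orderXml);
}

class Client
{
    static void Main()
    {
        // Messages sent while the server is down simply wait in the queue.
        var binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
        var address = new EndpointAddress("net.msmq://localhost/private/OrderQueue");
        var factory = new ChannelFactory<IOrderService>(binding, address);
        IOrderService proxy = factory.CreateChannel();
        proxy.SubmitOrder("<order id='1'/>");
        ((IClientChannel)proxy).Close();
        factory.Close();
    }
}
```

Messages accumulate in the queue while the service is offline and are delivered once it comes back up, which is exactly the disconnected behaviour you are after.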
Ad question 1: using MSMQ is excellent for a scenario where the service may not always be up and running. Note that the server that hosts the message queue must be up and reachable to receive the messages. However, you haven't told us anything else about your scenario, particularly why you currently have NetTCP. The reason that's important, is because there are some things you can not do with MSMQ, for example duplex communication won't work out of the box.
Ad question 2: an alternative may be to implement logic in the client (it's unclear from the question if you're the owner of the client software) to have a local queue and retry messages later if a service is (temporarily) offline. I guess you may even have a proxy MSMQ service on the client, relaying the messages to the main service once it's up.
Ad question 3: yes, you can use Discovery for this. The service will have to announce to the clients when it goes online or offline. The simplest example is using the UdpAnnouncementEndpoint. In the clients you can use the AnnouncementService class to listen for the service coming online or offline, and keep a local list of available services. Alternatively (for example when UDP broadcasts aren't feasible) you can create a discovery proxy service at a well-known location that listens to announcements, which the clients can query for instant knowledge of whether the service they need is online.
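A rough sketch of the client-side announcement listener (the service side would additionally need a ServiceDiscoveryBehavior with an announcement endpoint configured):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Discovery;

class AnnouncementListener
{
    static void Main()
    {
        var announcementService = new AnnouncementService();

        // Keep a local list of available services based on these events.
        announcementService.OnlineAnnouncementReceived += (s, e) =>
            Console.WriteLine("Online:  " + e.EndpointDiscoveryMetadata.Address);
        announcementService.OfflineAnnouncementReceived += (s, e) =>
            Console.WriteLine("Offline: " + e.EndpointDiscoveryMetadata.Address);

        using (var host = new ServiceHost(announcementService))
        {
            host.AddServiceEndpoint(new UdpAnnouncementEndpoint());
            host.Open();
            Console.WriteLine("Listening for announcements, press Enter to quit.");
            Console.ReadLine();
        }
    }
}
```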
I'm looking at a few different approaches to a problem: Client requests work, some stuff gets done, and a result (ok/error) is returned.
A .NET web service definitely seems like the way to go, my only issue is that the "stuff" will involve building up and tearing down a session for each request. Does abstracting the "stuff" out to an app (which would keep a single session active, and process the request from the web service) seem like the right way to go? (and if so, what communication method)
The work time is negligible, my concern is the hammering the transaction servers in question will probably get if I create/drop a session for each job.
Is some form of IPC or socket based communication a feasible solution here?
Thoughts/comments/experiences much appreciated.
Edit:
After a bit more research, it seems like hosting a WCF service in a Windows Service is probably a better way to go...
well, here we go:
Do not set up a new communication channel for every request. Use WCF implementations on the client and server side, and make sure the client does not get a new proxy for every call ;) Especially with HTTPS, channel setup is expensive.
Do not call the service ;) Simple as that - make as few calls as possible. Have a coarse-grained interface optimized for not making many calls, and allow batching of results. For example, "GetInvoice" could become "GetDocuments" - taking multiple IDs and returning multiple documents, not just one invoice. A list of 30 invoices turns from 30 calls into 1. Otherwise a messaging-bus approach may work - I use one like that and transport up to 1024 messages per call.
Try not to use a web service if you have a ton of load. Use WCF and the NetTcp binding (which is binary and does not use HTTP, so it is not a web service). The HTTP overhead is tremendous - look it up; someone ran a test showing net.tcp being something like 900 TIMES faster. Especially if each call does little work and you have lots of users, this can increase scalability. The downside: you are only WCF-compliant on the client side, no "web service" anymore.
This is pretty much all you can do. Each option has different trade-offs, but this is how to get performance up ;) If you are talking purely about IPC, consider using named pipes, not even net.tcp - named pipes are very efficient (using shared memory between the processes), but limited to the same machine in the .NET implementation.
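To make the first two points concrete, here is a sketch of a cached channel factory with a coarse-grained contract (names and address are invented for illustration):

```csharp
using System;
using System.ServiceModel;

// Hypothetical coarse-grained contract: one call fetches many documents.
[ServiceContract]
public interface IDocumentService
{
    [OperationContract]
    byte[][] GetDocuments(int[] ids);
}

static class DocumentClient
{
    // Create the factory once and reuse it; channel setup is the expensive part.
    // NetTcpBinding keeps the transport binary and avoids the HTTP overhead.
    static readonly ChannelFactory<IDocumentService> Factory =
        new ChannelFactory<IDocumentService>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:9000/documents"));   // placeholder address

    public static byte[][] FetchInvoices(int[] invoiceIds)
    {
        IDocumentService proxy = Factory.CreateChannel();
        try
        {
            return proxy.GetDocuments(invoiceIds);   // 30 invoices = 1 call, not 30
        }
        finally
        {
            ((IClientChannel)proxy).Close();
        }
    }
}
```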
I have 50+ kiosk style computers that I want to be able to get a status update, from a single computer, on demand as opposed to an interval. These computers are on a LAN in respect to the computer requesting the status.
I researched WCF; however, it looks like I'll need IIS installed, and I would rather not install IIS on 50+ Windows XP boxes - so I think that eliminates using a web service, unless it's possible to have a WinForms app host a web service?
I also researched using System.Net.Sockets and even got a barely functional prototype going however I feel I'm not skilled enough to make it a solid and reliable system. Given this path, I would need to learn more about socket programming and threading.
These boxes are running .NET 3.5 SP1, so I have complete flexibility in the .NET version; however, I'd like to stick to C#.
What is the best way to implement this? Should I just bite the bullet and learn Sockets more or does .NET have a better way of handling this?
edit:
I was going to go with two-way communication until I realized that all I needed was one-way communication.
edit 2:
I was avoiding the traditional server/client approach and going with the inverse because I wanted to avoid consuming too much bandwidth and wasn't sure what kind of overhead I was dealing with. I was also hoping to have more control over the individual kiosks. After looking at it, I think I can still have that with WCF and connect by IP (I wasn't aware I could connect by IP - I was thinking I would have to add 50 web services or something).
WCF does not have to be hosted within IIS; it can be hosted within your WinForms app, in a console application, or in a Windows service.
You can have each computer host its service within the WinForms app, and write a program on your own computer that calls each computer's service to get the status information.
Another way of doing it is to host one service on your own computer and have the 50+ computers call that service whenever their status is updated; you can use a database for the service to persist the status data of each node on the network. This option is easier to maintain and more scalable.
P.S.
WCF aims to replace .NET Remoting; the alternatives here can be the net.tcp or net.pipe bindings.
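As a rough illustration of the self-hosting approach described above (the contract, binding and address are placeholders):

```csharp
using System;
using System.ServiceModel;

// Hypothetical status contract for the kiosks.
[ServiceContract]
public interface IKioskStatus
{
    [OperationContract]
    string GetStatus();
}

public class KioskStatus : IKioskStatus
{
    public string GetStatus() { return "OK"; }
}

class Program
{
    static void Main()
    {
        // Self-hosting: no IIS needed. In a WinForms app or Windows service,
        // open the host on startup and close it on shutdown instead.
        var baseAddress = new Uri("net.tcp://localhost:9000/kiosk");   // placeholder address
        using (var host = new ServiceHost(typeof(KioskStatus), baseAddress))
        {
            host.AddServiceEndpoint(typeof(IKioskStatus), new NetTcpBinding(), "");
            host.Open();
            Console.WriteLine("Kiosk status service running, press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```

The requesting computer would then create a ChannelFactory&lt;IKioskStatus&gt; per kiosk address and call GetStatus() on demand.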
Unless you have plans to scale this to several thousand clients I don't think WCF performance will even be a fringe issue. You can easily host WCF services from windows services or Winforms applications, and you'll find getting something working with WCF will be fairly simple once you get the key concepts.
I've deployed something similar with around 100-150 clients with great success.
There's plenty of resources out on the web to get you started - here's one to get you going:
http://msdn.microsoft.com/en-us/library/aa480190.aspx
Whether you use a web service or WCF on your central server, you only need to install and configure IIS on the server (and not on the 50+ clients).
What you're trying to do is a little unclear from the question, but if the clients need to call the server (to get a server status, for example), then they just call a method on the webservice running on the server.
If instead you need to have the server call the clients from time to time, then you'll need to have each client call a sign-in method on the server webservice each time the client starts up. The sign-in method would take a delegate method from the client as a parameter. The server would then call this delegate when it needed information from the client.
Setting up each client with its own web service would represent an inversion of the traditional (one server, multiple clients) client/server architecture, and as you've already noted this would be impractical.
Do not use remoting.
If you want robustness and scalability you end up ruling out everything but what are essentially stateless remote procedure calls. Since this is exactly the capability of web services, and web services are simpler and easier to build, remoting is an essentially pointless technology.
Callbacks with remote delegates are on the performance/reliability forbidden list, so if you were thinking of using remoting for that, think again.
Use web services.
I know you don't want to be polling, but I don't think you need to. Since you say all your units are on a single network segment, I suggest UDP broadcast for change notifications - essentially setting a dirty flag - and letting the application (re-)fetch the details on demand. It's still not reliable, but it's easy and very fast because it's broadcast.
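A sketch of that dirty-flag broadcast with UdpClient (the port number is arbitrary):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class ChangeNotifier
{
    const int Port = 15000;   // arbitrary port for this sketch

    // The kiosk broadcasts a tiny "I changed" message on its network segment.
    public static void NotifyChanged(string kioskId)
    {
        using (var udp = new UdpClient { EnableBroadcast = true })
        {
            byte[] payload = Encoding.UTF8.GetBytes(kioskId);
            udp.Send(payload, payload.Length, new IPEndPoint(IPAddress.Broadcast, Port));
        }
    }

    // The monitoring machine listens for those flags and re-fetches details on demand.
    public static void Listen()
    {
        using (var udp = new UdpClient(Port))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] data = udp.Receive(ref remote);
                string kioskId = Encoding.UTF8.GetString(data);
                Console.WriteLine(kioskId + " changed - fetch its status now");
            }
        }
    }
}
```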
As others have said you don't need IIS, you can self-host. See ServiceHost class for details on how to do this.
I'd suggest using .NET Remoting. It's quite easy to implement and doesn't require anything else.
For me it is better to learn networking, or the manual way of socket communication. Web services are much slower because they carry metadata.
Your clients and the server can become multithreaded applications; just imitate the request/response architecture. It is much easier to implement a network application like this.
If you just need a status update, you can use a much simpler solution, such as plain TCP server/client messaging or, as orrsella said, Remoting. WCF is kind of overkill here.
One note though: if all your 50+ kiosks are connected via the internet, then you might need to use a VPN or have an open port on each kiosk (which is a security risk) so that your server can retrieve status updates from each kiosk.
We had a similar situation, but the status is sent to our server periodically, so we only have one port to protect/secure. The frequency of the update is configurable to accommodate slower clients.
As someone who implemented something like this with over 500+ clients and growing:
Message queuing is the way to go.
We have gone from an internally developed TCP server and client, to WCF polling, and ended up with message queuing. It's the only guaranteed way to get data to and from clients and servers over the internet. As a bonus, many of these solutions come with an extensive framework making it trivial to implement publish-subscribe, send-one-way, point-to-point and request-reply messaging. Some of these are possible with WCF, but it will involve crying, shouting, whimpering and long nights, not to mention gallons of coffee.
A couple of important remarks:
Letting a process poll the clients instead of the other way around is a bad idea. It is not scalable at all, and you will soon run into trouble when the polling pass takes too long to complete. Not to mention having to handle all the IP addresses (do you have access to all clients on the required ports? What happens when an IP changes, etc.?).
What we have done: the clients send status updates to a central message queue at a regular interval (you can easily implement live updates in the UI), and each client also listens on its own queue for a GetStatusRequest message. If it receives one, it answers (with a timeout). This way we can see the overall status of all clients at all times and get a specific status of a specific client when needed.
Concerning bandwidth: kiosks usually show images/video etc., so status messages of 1 KB or less will not be a big overhead.
I CANNOT stress enough that the design you currently propose will have a very intensive development cycle AND will not scale or extend well (trust me, we have learned this lesson). Next to this, building a good client/server protocol for this kind of thing is a hard job that will be totally useless afterwards if you make a design error (migrating a protocol is not easy).
We have built our solution on top of ActiveMQ (using the NMS library for C#) and are currently extending Simple Service Bus for our internal workings.
We only use WCF for the communication between our winforms app and the centralized service(s)
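A rough sketch of the queue-based status exchange described above, using the Apache.NMS client (the broker URI and queue names are placeholders):

```csharp
using System;
using Apache.NMS;

class StatusClient
{
    static void Main()
    {
        IConnectionFactory factory = new NMSConnectionFactory(new Uri("activemq:tcp://broker:61616"));
        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        {
            connection.Start();

            // Client -> server: push a status update onto the central queue at a regular interval.
            IDestination statusQueue = session.GetQueue("client.status");
            using (IMessageProducer producer = session.CreateProducer(statusQueue))
            {
                producer.Send(session.CreateTextMessage("client-42: OK"));
            }

            // Server -> client: each client also listens on its own queue for a GetStatusRequest.
            IDestination myQueue = session.GetQueue("client.42.requests");
            using (IMessageConsumer consumer = session.CreateConsumer(myQueue))
            {
                consumer.Listener += message =>
                {
                    // Answer the request here, e.g. by sending the current status back on a reply queue.
                };
                Console.ReadLine();
            }
        }
    }
}
```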