In a Windows Service I implemented an HttpListener that handles incoming HTTP requests on a certain port, parses the query string, inserts it into a database and sends a confirmation response. All works well and I was quite pleased with my solution. However, the clients said they were a bit skeptical and asked if the same could be done via a webpage, like having an HttpHandler listen on a certain port. Got me thinking. What would you do in my situation?
Go with the HttpListener/Windows Service or HTTPHandler/.aspx?
Thank you very much!
Is there any reason why you don't want to use a web server? We've implemented our own HTTP-serving services because they are fairly unusual in the way they process requests and would prove taxing on a normally configured IIS instance.
In your situation, this doesn't appear to be the case, so yes, I find myself wondering why you didn't go the webserver route either.
EDIT
Is there any other web-facing part of your application? If not, I would concur that @Mr Disappoinment's reasoning is sound. You're only exposing what you need, which is considerably less attack surface than an IIS instance.
I would use something through IIS, simply because I think my clients' IT staffs would require a pretty significant argument for me to be telling them to install custom services on their servers. I don't know enough about the threading behavior of the HttpListener (does it use thread pools? max number of threads? queueing once a max has been hit?) to say for sure, but I'd imagine that your client has similar concerns.
I am trying to use web sockets to allow two Windows services on different machines to pass data back and forth. Almost all the examples or information I have found are about using web sockets for client/server communication. I am having trouble figuring out how to set this up. I have considered using WebSocketHost as a part of Microsoft.ServiceModel.WebSockets, but then I am unsure how to bind it to a local port and not a URL.
Does anyone have any suggestions?
Thanks
I am trying to use web sockets to allow two Windows services on different machines to pass data back and forth.
You can open sockets on both machines using WebSockets as you found. The examples mention clients and servers because this is the typical usage, however the API really doesn't care. As long as each side has a listener and a sender they can communicate.
However, I would like to mention that this isn't as simple as it sounds, because both machines aren't always available. Sometimes one or the other is busy, the network is blocked, something else is going on, or the listener is too busy to respond right away, so you're going to end up needing some sort of queuing on both sides.
If you're doing a process based operation where one side tells the other "I want X" and it's a big operation like producing a document, I've found it much more resilient to build a queue in a database and toss the request in there, then wait for the other side to update the record to say it's done.
If they're smaller, faster requests, MSMQ would be more appropriate if you have it available.
However back to your original question, if you want to use it, any of the client-server examples should work just fine. The API doesn't care.
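For illustration, here is a minimal sketch of two Windows services talking over a raw WebSocket, one side listening with HttpListener and the other connecting with ClientWebSocket. The port, path and host name are made up, and the listening side needs .NET 4.5 on Windows 8 / Server 2012 or later:

```csharp
using System;
using System.Net;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class WebSocketPeer
{
    // Listening side: accept one WebSocket connection on a local port and read a message.
    static async Task Listen()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:9000/data/");   // hypothetical port and path
        listener.Start();

        HttpListenerContext context = await listener.GetContextAsync();
        HttpListenerWebSocketContext wsContext = await context.AcceptWebSocketAsync(subProtocol: null);
        WebSocket socket = wsContext.WebSocket;

        var buffer = new byte[4096];
        WebSocketReceiveResult result =
            await socket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
        Console.WriteLine("Received: " + Encoding.UTF8.GetString(buffer, 0, result.Count));
    }

    // Sending side: connect to the other service and push a message.
    static async Task Send()
    {
        using (var client = new ClientWebSocket())
        {
            await client.ConnectAsync(new Uri("ws://other-machine:9000/data/"), CancellationToken.None);
            byte[] bytes = Encoding.UTF8.GetBytes("status update");
            await client.SendAsync(new ArraySegment<byte>(bytes),
                                   WebSocketMessageType.Text, true, CancellationToken.None);
        }
    }
}
```

Each service would run both halves, so either machine can send to the other whenever it needs to.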
You can use SignalR Self-Host; you really don't want to create your own WebSockets framework, since that will take a long time.
Here is a link on how to start an OWIN server in a Windows service.
Hosting WebAPI using OWIN in a windows service
And how to set signalR in self host
Tutorial: SignalR Self-Host
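Putting those two together, a minimal self-host sketch might look like the following (it assumes the Microsoft.AspNet.SignalR.SelfHost and Microsoft.Owin.Hosting packages; the URL and hub name are placeholders):

```csharp
using System;
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Hosting;
using Owin;

class Program
{
    static void Main()
    {
        // In a Windows service you would call WebApp.Start in OnStart and dispose it in OnStop.
        using (WebApp.Start<Startup>("http://localhost:8080"))
        {
            Console.WriteLine("SignalR self-host running on http://localhost:8080");
            Console.ReadLine();
        }
    }
}

class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.MapSignalR();   // maps hubs under /signalr
    }
}

// A hub the other service (or a browser) can call.
public class StatusHub : Hub
{
    public void Send(string message)
    {
        Clients.All.broadcast(message);   // invokes "broadcast" on all connected clients
    }
}
```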
You can accomplish this with Memory Mapped Files.
Inter-Process Communication with Memory-Mapped Files
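A rough sketch of that approach is below. Keep in mind that memory-mapped files only work between processes on the same machine, so this does not cover the cross-machine case; the map name and layout here are made up for the example:

```csharp
using System;
using System.IO.MemoryMappedFiles;
using System.Text;

class SharedMemoryExample
{
    const string MapName = "MyServiceSharedMap";   // hypothetical map name

    // Writer process: create (or open) the map and write a length-prefixed message.
    // The returned MemoryMappedFile must stay alive for as long as readers need it.
    static MemoryMappedFile Write(string message)
    {
        MemoryMappedFile mmf = MemoryMappedFile.CreateOrOpen(MapName, 1024);
        using (MemoryMappedViewAccessor accessor = mmf.CreateViewAccessor())
        {
            byte[] bytes = Encoding.UTF8.GetBytes(message);
            accessor.Write(0, bytes.Length);
            accessor.WriteArray(4, bytes, 0, bytes.Length);
        }
        return mmf;
    }

    // Reader process: open the same map by name and read the message back.
    static string Read()
    {
        using (MemoryMappedFile mmf = MemoryMappedFile.OpenExisting(MapName))
        using (MemoryMappedViewAccessor accessor = mmf.CreateViewAccessor())
        {
            int length = accessor.ReadInt32(0);
            var bytes = new byte[length];
            accessor.ReadArray(4, bytes, 0, length);
            return Encoding.UTF8.GetString(bytes);
        }
    }
}
```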
I need to create a C# program that will run on a couple of our local Windows client machines. These 'client' programs will have to take commands from an 'admin' program running on another machine.
The commands could be to reboot the client computers, or to return some local information such as the IP address back to the 'admin' program.
But how to accomplish this? I know a little about WCF but is that the right way to go?
If I go with WCF I will then have to make the client programs run a service method, like every second, to check for new commands. With sockets I establish a 'direct' connection and the client just waits to receive a command - is that correctly understood?
Which way would be the right way for me to go?
We are talking about ~10 clients and I want a maximum delay (send command - receive info back) of 1 second.
Any hints would also be appreciated.
Best regards
Duplex WCF server. Basically, the clients all connect into the server (so only 1 server), and the server uses its duplex channel to call back to the clients whenever it needs to. No polling, scales well, etc. The biggest headache you'll need to deal with is setting a long timeout so the channels don't time out if you don't send anything for a while.
WCF will end up being much simpler in the end.
A couple of links:
http://msdn.microsoft.com/en-us/library/ms731064.aspx
and
http://www.codeproject.com/Articles/491844/A-Beginners-Guide-to-Duplex-WCF
I hope those help.
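For illustration, here is a minimal sketch of what the duplex contract might look like (the interface and method names are placeholders, not a specific recommended design). The client supplies a callback implementation when it connects, and the server grabs the callback channel so it can push commands later:

```csharp
using System.ServiceModel;

// Callback contract implemented by each client; the server calls these methods.
public interface IClientCallback
{
    [OperationContract(IsOneWay = true)]
    void ExecuteCommand(string command);
}

// Service contract hosted on the single server.
[ServiceContract(CallbackContract = typeof(IClientCallback))]
public interface IAdminService
{
    [OperationContract]
    void Register(string clientName);
}

public class AdminService : IAdminService
{
    public void Register(string clientName)
    {
        // Capture the client's callback channel so the server can push commands later,
        // e.g. store it in a dictionary keyed by clientName.
        IClientCallback callback = OperationContext.Current.GetCallbackChannel<IClientCallback>();
    }
}
```

On the client side you would connect with DuplexChannelFactory&lt;IAdminService&gt;, passing an InstanceContext that wraps your IClientCallback implementation, over a binding that supports duplex (e.g. NetTcpBinding or WSDualHttpBinding).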
You can make WCF clients act as servers and connect to them from the command & control program; that is no problem. Go for WCF if you don't want to mess with the ugly stuff that sockets can bring. WCF can be configured nicely in app.config, and you can make it a truly self-hosted command-line application, so there's no need for an IIS server. The configuration will then be reusable and easier to maintain.
You could use .NET Remoting, which can provide a "push" backchannel from the server to the client (using "callbacks"). It does not need a second TCP connection in the other direction, so you don't need to mess with clients' firewalls and routers.
Remoting is considered kind of obsolete but it has its places.
In any case I would not use a WCF polling technique. That leads to bad latency and a DDoS-like situation for the server.
If you can make the clients open a port then hosting a WCF service there is probably the best idea.
I would recommend a socket implementation, as in the long run it probably gives you greater flexibility. You could create this from scratch yourself using the Socket namespace, or as an alternative you could use an off-the-shelf network library. Check out lidgren and NetworkComms.Net.
Disclaimer: I'm a developer for NetworkComms.Net.
With the small number of machines and modest performance requirements you mentioned, I think WCF would end up being easier than sockets.
You might look into Duplex WCF. I've never used it, and WCF has given me headaches in the past any time I've needed anything unusual, but it's for the sort of problem you're talking about.
If all the machines are on one network, here's one creative alternative loosely inspired by message queues: you could use a database table as a place where messages appear and get read by clients at their leisure. The clients could just query it and say: get me all messages where MessageID > LastReceivedMessageID.
The downsides of that last approach are that (a) you're still doing polling, although your database server should be able to handle it, and (b) if you ever need this outside of your network, you would need a VPN or a new solution.
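As a rough sketch of that table-polling idea (the table, columns and connection handling are hypothetical):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

class MessagePoller
{
    // Fetch all messages this client has not seen yet.
    // The caller persists lastReceivedId between polls.
    public static List<string> GetNewMessages(string connectionString, ref long lastReceivedId, int clientId)
    {
        var messages = new List<string>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT MessageID, Body FROM Messages " +
            "WHERE MessageID > @lastId AND TargetClientID = @clientId " +
            "ORDER BY MessageID", connection))
        {
            command.Parameters.AddWithValue("@lastId", lastReceivedId);
            command.Parameters.AddWithValue("@clientId", clientId);
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    lastReceivedId = reader.GetInt64(0);   // high-water mark for the next poll
                    messages.Add(reader.GetString(1));
                }
            }
        }
        return messages;
    }
}
```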
You could use MSMQ... it couldn't be easier to implement:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms711472(v=vs.85).aspx
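A minimal sketch using the System.Messaging API (the queue path is a placeholder; in practice you would also handle permissions and dead-letter queues):

```csharp
using System.Messaging;   // add a reference to System.Messaging.dll

class CommandQueue
{
    const string QueuePath = @".\Private$\ClientCommands";   // hypothetical private queue

    // Admin side: put a command on the queue.
    public static void SendCommand(string command)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Send(command, "command");   // body plus a label
        }
    }

    // Client side: block until a command arrives (use the TimeSpan overload to time out).
    public static string ReceiveCommand()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            using (Message message = queue.Receive())
            {
                return (string)message.Body;
            }
        }
    }
}
```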
I use MSMQ for a number of similar applications. Works perfectly.
I am planning a SaaS system, to be written in C#, ASP.NET using WCF that has two separate components:
On a static IP web server in the cloud will be a web app, common to all clients.
Inside each client's office will be another app, installed on a server with IIS.
The site app will obviously be able to connect to the web services published on the web site. But here's the rub - I also want the web app to be able to initiate a connection to the site app... and the on-site server may not necessarily have a static IP. I can't control this, because we may have hundreds of clients at some point in the future, and we cannot limit our saleability by insisting that the customer has a server with fixed IP.
So, how to do this?
I could have the site apps "checking in" with the web every minute or so, to give the web app the possibility of responding with a "while you're here, please do x,y,z..." but that seems very inelegant. Also, if we're talking about hundreds of clients, I don't want to be bombarding my web server with all these "hi there!" messages if they're not actually required.
Is there a better way?
WCF? Here we go:
Use a message-based approach (exchange messages, no stateful method calls).
Clients connect to the server. Establish an HTTP-based two-way connection; this way the server can call back to connected clients. This is standard WCF stuff and works well through NAT with version 4 of the .NET Framework.
Voila. In case of a disconnect the client can re-connect, re-identify itself and get the pending messages.
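The answer above doesn't name a specific binding; as one concrete sketch of the client side, here is a duplex connection using NetTcpBinding, which carries callbacks over the single client-initiated connection (so it also works through NAT). All contract, endpoint and identifier names are placeholders:

```csharp
using System;
using System.ServiceModel;

// Placeholder callback contract implemented by the site app; the web app calls these methods.
public interface ISiteCallback
{
    [OperationContract(IsOneWay = true)]
    void DeliverMessage(string message);
}

[ServiceContract(CallbackContract = typeof(ISiteCallback))]
public interface IWebAppService
{
    [OperationContract]
    void Identify(string siteId);
}

public class SiteCallback : ISiteCallback
{
    public void DeliverMessage(string message)
    {
        Console.WriteLine("Web app pushed: " + message);
    }
}

class SiteAppConnection
{
    public static IWebAppService Connect(string serverAddress)   // e.g. "net.tcp://cloud-host:8000/hub"
    {
        var binding = new NetTcpBinding();
        binding.ReceiveTimeout = TimeSpan.FromHours(24);   // keep an idle duplex channel alive

        var factory = new DuplexChannelFactory<IWebAppService>(
            new InstanceContext(new SiteCallback()), binding, new EndpointAddress(serverAddress));

        IWebAppService channel = factory.CreateChannel();
        ((ICommunicationObject)channel).Faulted += (s, e) => { /* re-connect and re-identify here */ };
        channel.Identify("site-42");   // re-identify after every (re)connect
        return channel;
    }
}
```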
IIRC "push communication" is done by letting the client do a HTTP Request with an indefinate timeout. Then the server responds when he has something to say. After the respons the client immediately makes a new request.
It works out the same way like the server is making the connection and takes far less resources than polling.
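A rough sketch of that loop on the client side (the URL is a placeholder; HttpClient with an infinite timeout needs .NET 4.5):

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class LongPollingClient
{
    public static async Task PollLoop(CancellationToken cancel)
    {
        using (var client = new HttpClient { Timeout = Timeout.InfiniteTimeSpan })
        {
            while (!cancel.IsCancellationRequested)
            {
                try
                {
                    // The server holds this request open until it has something to say.
                    string message = await client.GetStringAsync("http://example.com/api/pending");
                    Console.WriteLine("Server said: " + message);
                    // Loop around and immediately issue the next request.
                }
                catch (HttpRequestException)
                {
                    // Network hiccup: back off briefly before retrying.
                    await Task.Delay(TimeSpan.FromSeconds(5), cancel);
                }
            }
        }
    }
}
```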
Dynamic DNS is one possibility, but depends on your clients/customers.
If the site app is created by you, it only has to contact the web server when its address has changed (or when the site server/web app is restarted). Still, a keep-alive heartbeat of, say, every 30 minutes to 1 hour isn't a bad idea.
Edit: I think SNMP services may provide the answer but I'm not a networking expert. You'll have to do some digging or ask a separate question on stackoverflow.
What would you say about Comet technology?
Sounds like you'll definitely need some sort of registry on the server; then it could attempt to call out to the client apps if it needs work doing.
Generally it is the client apps that check in with the server every X seconds - this is how Selenium Grid works anyway, with a central hub with which clients register. When the hub receives a request to run some tests it passes the jobs out to the clients to perform.
You may not need the "checking in". The server could just attempt to call out to a registered client app until it finds one that is available. This way only the server would need a static address (it could use a DNS name instead of an IP to make it more robust).
Also have a look at XMPP PubSub. This could be a more robust and standardised way to handle this.
In the end I decided to go with NetTcpBinding, for reasons best given by @Allon Guralnek here. It's worth clicking through and reading what he has to say...
I'm looking at a few different approaches to a problem: Client requests work, some stuff gets done, and a result (ok/error) is returned.
A .NET web service definitely seems like the way to go, my only issue is that the "stuff" will involve building up and tearing down a session for each request. Does abstracting the "stuff" out to an app (which would keep a single session active, and process the request from the web service) seem like the right way to go? (and if so, what communication method)
The work time is negligible, my concern is the hammering the transaction servers in question will probably get if I create/drop a session for each job.
Is some form of IPC or socket based communication a feasible solution here?
Thoughts/comments/experiences much appreciated.
Edit:
After a bit more research, it seems like hosting a WCF service in a Windows Service is probably a better way to go...
Well, here we go:
Do not set up a new communication channel for every request. Use WCF implementations on the client side and the server side, and make sure the client does not get a new proxy for every call ;) Especially with HTTPS, channel setup is expensive.
Do not call the service ;) Simple as that - make as few calls as possible. Have a coarse interface optimized to avoid making many calls, and allow batching of results. For example, "GetInvoice" could become "GetDocuments" - taking multiple IDs and returning multiple documents, not just one invoice. A list of 30 invoices turns from 30 calls into 1. Otherwise a messaging-bus approach may work - I use one like that and transport up to 1024 messages per call.
Try not to use a web service if you have a ton of load. Use WCF and the NetTcp binding (which is binary and does not use HTTP, so it is not a web service). The HTTP overhead is tremendous - search around; someone ran a test showing net.tcp is sometimes about 900 times faster. Especially if you do little work per call and have lots of users, this may increase scalability. The negative: you are only WCF-compliant on the client side, not a "web service" anymore.
This is pretty much all you can do - with different trade-offs, but this is how to get performance up ;) If you are talking about pure IPC, consider using named pipes rather than net.tcp - named pipes are very efficient (using shared memory between the processes), but limited to the same machine in the .NET implementation.
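To illustrate the first and third points, here is a sketch of creating one net.tcp channel and reusing it across calls instead of building a new proxy per request (the contract, address and batching method are placeholders):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IDocumentService
{
    // Coarse-grained: many documents in one round trip instead of one call per document.
    [OperationContract]
    string[] GetDocuments(int[] ids);
}

class DocumentClient
{
    // Create the factory and channel once; channel setup is the expensive part.
    static readonly ChannelFactory<IDocumentService> Factory =
        new ChannelFactory<IDocumentService>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://appserver:8523/documents"));

    static readonly IDocumentService Channel = Factory.CreateChannel();

    public static string[] FetchInvoices(int[] invoiceIds)
    {
        return Channel.GetDocuments(invoiceIds);   // 30 invoices = 1 call, not 30
    }
}
```

In real code you would also watch for the channel faulting and recreate it, but the point is that the proxy is built once and reused.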
I have 50+ kiosk style computers that I want to be able to get a status update, from a single computer, on demand as opposed to an interval. These computers are on a LAN in respect to the computer requesting the status.
I researched WCF; however, it looks like I'll need IIS installed, and I would rather not install IIS on 50+ Windows XP boxes -- so I think that eliminates using a web service, unless it's possible to have a WinForms app host a web service?
I also researched using System.Net.Sockets and even got a barely functional prototype going however I feel I'm not skilled enough to make it a solid and reliable system. Given this path, I would need to learn more about socket programming and threading.
These boxes are running .NET 3.5 SP1, so I have complete flexibility in the .NET version however I'd like to stick to C#.
What is the best way to implement this? Should I just bite the bullet and learn Sockets more or does .NET have a better way of handling this?
edit:
I was going to go with a two way communication until I realized that all I needed was a one way communication.
edit 2:
I was avoiding the traditional server/client and going with the inverse because I wanted to avoid consuming too much bandwidth and wasn't sure what kind of overhead I was talking about. I was also hoping to have more control over the individual kiosks. After looking at it, I think I can still have that with WCF and connect by IP (I wasn't aware I could connect by IP; I was thinking I would have to add 50 web services or something).
WCF does not have to be hosted within IIS; it can be hosted within your WinForms app, as a console application, or as a Windows service.
You can have each computer host its service within the WinForms app, and write a program on your own computer to call each computer's service to get the status information.
Another way of doing it is to host one service on your own computer, and make the 50+ computers call the service whenever their status is updated; you can use a database for the service to persist the status data of each node on the network. This option is easier to maintain and more scalable.
P.S.
WCF aims to replace .NET Remoting; the alternative bindings here would be net.tcp or net.pipe.
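As a sketch of the first option, here is a status service hosted inside a Windows service on each kiosk (the contract, port and URI are made up; a WinForms app could open the same ServiceHost instead):

```csharp
using System;
using System.ServiceModel;
using System.ServiceProcess;

[ServiceContract]
public interface IKioskStatus
{
    [OperationContract]
    string GetStatus();
}

public class KioskStatus : IKioskStatus
{
    public string GetStatus()
    {
        return "OK";   // report whatever the kiosk knows about itself
    }
}

public class KioskWindowsService : ServiceBase
{
    private ServiceHost host;

    protected override void OnStart(string[] args)
    {
        host = new ServiceHost(typeof(KioskStatus),
                               new Uri("net.tcp://localhost:8523/status"));
        host.AddServiceEndpoint(typeof(IKioskStatus), new NetTcpBinding(), "");
        host.Open();   // the monitoring PC calls net.tcp://<kiosk-ip>:8523/status
    }

    protected override void OnStop()
    {
        if (host != null)
        {
            host.Close();
            host = null;
        }
    }
}
```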
Unless you have plans to scale this to several thousand clients I don't think WCF performance will even be a fringe issue. You can easily host WCF services from windows services or Winforms applications, and you'll find getting something working with WCF will be fairly simple once you get the key concepts.
I've deployed something similar with around 100-150 clients with great success.
There's plenty of resources out on the web to get you started - here's one to get you going:
http://msdn.microsoft.com/en-us/library/aa480190.aspx
Whether you use a web service or WCF on your central server, you only need to install and configure IIS on the server (and not on the 50+ clients).
What you're trying to do is a little unclear from the question, but if the clients need to call the server (to get a server status, for example), then they just call a method on the webservice running on the server.
If instead you need to have the server call the clients from time to time, then you'll need to have each client call a sign-in method on the server webservice each time the client starts up. The sign-in method would take a delegate method from the client as a parameter. The server would then call this delegate when it needed information from the client.
Setting up each client with its own web service would represent an inversion of the traditional (one server, multiple clients) client/server architecture, and as you've already noted this would be impractical.
Do not use remoting.
If you want robustness and scalability you end up ruling out everything but what are essentially stateless remote procedure calls. Since this is exactly the capability of web services, and web services are simpler and easier to build, remoting is an essentially pointless technology.
Callbacks with remote delegates are on the performance/reliability forbidden list, so if you were thinking of using remoting for that, think again.
Use web services.
I know you don't want to be polling, but I don't think you need to. Since you say all your units are on a single network segment then I suggest UDP for broadcast change notifications, essentially setting a dirty flag, and allowing the application to (re-)fetch on demand. It's still not reliable but it's easy and very fast because it's broadcast.
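A rough sketch of that broadcast dirty-flag idea (the port is arbitrary, and UDP delivery is not guaranteed, so treat a notification only as a hint to re-fetch):

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;

class ChangeNotifier
{
    const int Port = 15000;   // arbitrary port for the example

    // Sender side: broadcast a small "something changed" notification to the segment.
    public static void NotifyChanged(string what)
    {
        using (var udp = new UdpClient())
        {
            udp.EnableBroadcast = true;
            byte[] data = Encoding.UTF8.GetBytes(what);
            udp.Send(data, data.Length, new IPEndPoint(IPAddress.Broadcast, Port));
        }
    }

    // Receiver side: block until a notification arrives, mark the data dirty, then re-fetch.
    public static string WaitForChange()
    {
        using (var udp = new UdpClient(Port))
        {
            var sender = new IPEndPoint(IPAddress.Any, 0);
            byte[] data = udp.Receive(ref sender);   // unreliable by design; treat as a hint only
            return Encoding.UTF8.GetString(data);
        }
    }
}
```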
As others have said you don't need IIS, you can self-host. See ServiceHost class for details on how to do this.
I'd suggest using .NET Remoting. It's quite easy to implement and doesn't require anything else.
For me it is better to learn networking, or the manual way of socket communication. Web services are much slower because they carry metadata.
Your clients and servers can become multithreaded applications; just imitate the request/response architecture. It is much easier to implement a network application like this.
If you just need a status update, you can use a much simpler solution, such as simple TCP server/client messaging or, like orrsella said, Remoting. WCF is kind of overkill here.
One note though: if all your 50+ kiosks are connected via the internet, then you might need to use a VPN or have an open port on each kiosk (which is a security risk) so that your server can retrieve status updates from each kiosk.
We had a similar situation, but the status is sent to our server periodically, so we only have one port to protect/secure. The frequency of the update is configurable to accommodate slower clients.
As someone who implemented something like this with over 500+ clients and growing:
Message queuing is the way to go.
We have gone from an internally developed TCP server and client, to WCF polling, and ended up with message queuing. It's the only guaranteed way to get data to and from clients and servers over the internet. As a bonus, many of these solutions have an extensive framework making it trivial to implement publish-subscribe, send-one-way, point-to-point and request-reply messaging. Some of these are possible with WCF, but it will involve crying, shouting, whimpering and long nights, not to mention gallons of coffee.
A couple of important remarks:
Letting a process poll the clients instead of the other way around = bad idea. It is not scalable at all, and you will soon run into trouble when the process takes too long to complete. Not to mention having to handle all the IP addresses (do you have access to all clients on the required ports? What happens when the IP changes? etc.)
What we have done: each client sends status updates to a central message queue at a regular interval (you can easily implement live updates in the UI), and it also listens on its own queue for a GetStatusRequest message. If it receives one, it answers (with a timeout). This way we can see the overall status of all clients at all times and get the specific status of a specific client when needed.
Concerning bandwidth: kiosks usually show images/video etc., so status messages of 1 KB or less will not be a big overhead.
I cannot stress enough that the design you present will have a very intensive development cycle AND will not scale or extend well (trust me, we have learned this lesson). On top of this, building a good client/server protocol for this type of thing is a hard job that will be totally useless afterwards if you make a design error (migrating a protocol is not easy).
We have built our solution on top of ActiveMQ (using the NMS library for C#) and are currently extending Simple Service Bus for our internal workings.
We only use WCF for the communication between our WinForms app and the centralized service(s).
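For reference, here is a minimal sketch of the send/listen pattern described above using the Apache.NMS.ActiveMQ client (the broker URL and queue names are placeholders):

```csharp
using System.Threading;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class KioskMessaging
{
    public static void Run()
    {
        var factory = new ConnectionFactory("tcp://broker-host:61616");   // placeholder broker
        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        {
            connection.Start();

            // Publish a status update to the central queue on a regular interval.
            IDestination statusQueue = session.GetQueue("status.updates");
            using (IMessageProducer producer = session.CreateProducer(statusQueue))
            {
                producer.Send(session.CreateTextMessage("kiosk-17: OK"));
            }

            // Listen on this client's own queue for GetStatusRequest messages.
            IDestination ownQueue = session.GetQueue("kiosk-17.requests");
            using (IMessageConsumer consumer = session.CreateConsumer(ownQueue))
            {
                consumer.Listener += message =>
                {
                    var request = message as ITextMessage;
                    // Build and send the reply here, within the configured timeout.
                };
                Thread.Sleep(60000);   // keep the consumer alive for the demo
            }
        }
    }
}
```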