When using WCF for two computers to communicate over the network, I am executing a method on the remote server. The time the operation takes is not known in advance; it can be anywhere from one second to a day or more, so I want to set the ((IClientChannel)pipeProxy).OperationTimeout property to a very high value. Is this the way to go, or is it a dirty way of programming, given that a connection stays active for the whole time? (It is all on a relatively stable LAN.)
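For reference, the property being asked about is set on the client channel roughly like this (a minimal sketch; the IMyService contract and the address are placeholders, not from the original code):

using System;
using System.ServiceModel;

// Minimal sketch, assuming the proxy comes from a ChannelFactory<IMyService>.
var factory = new ChannelFactory<IMyService>(new NetTcpBinding(), "net.tcp://server:8000/service");
IMyService pipeProxy = factory.CreateChannel();

// Raise the per-operation timeout so a single call can run for a very long time.
((IClientChannel)pipeProxy).OperationTimeout = TimeSpan.FromDays(2);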
I wouldn't do it like that. Such a long timeout is likely to cause issues.
I would split the operation into two: One call from client to server which starts the operation, and then a callback from the server to the client to say that it's finished. The callback would of course include any result information (success, failure etc).
For something which takes such a long time, you might also want to introduce a "keep alive" mechanism where the client periodically calls the server to check that it is still responding.
If you have a very long timeout, it is hard to tell whether something has actually gone wrong. But if you split the operation into two, you cannot tell at all unless you poll occasionally with a keep-alive (or, more accurately, "are you alive?") style message.
Alternatively, you could have the server call back occasionally with a progress message, but that is a bit harder to manage than having the client poll the server occasionally (because the client would have to track the last time the server called it back in order to determine whether the server had stopped responding).
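A minimal sketch of the split-call-plus-callback idea, assuming a duplex contract over NetTcpBinding; the names (ILongRunningService, IWorkCallback, BeginWork, WorkCompleted, Ping) are placeholders for illustration, not from the question:

using System.ServiceModel;

// Callback contract implemented by the client.
public interface IWorkCallback
{
    [OperationContract(IsOneWay = true)]
    void WorkCompleted(bool success, string message);
}

// Service contract: the start call returns immediately; the result arrives via the callback.
[ServiceContract(CallbackContract = typeof(IWorkCallback))]
public interface ILongRunningService
{
    [OperationContract(IsOneWay = true)]
    void BeginWork(string jobId);

    // Lightweight "are you alive?" operation the client can poll periodically.
    [OperationContract]
    bool Ping();
}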
I'm making my server disconnect sockets that haven't sent any data for a certain amount of time, say 20 seconds.
I wonder whether using timers is a good approach for this, or whether there is something built into the socket library for it. Running a timer on the server for every socket seems heavy.
Would it be unsafe to let the client program handle this instead? For example, each client disconnects itself after not sending data for a while.
This should be very easy to implement as part of your keep-alive checking. Unless you're completely ignoring the issue of dropped connections, you probably have a keep-alive system that periodically sends a message client->server and vice versa if there's been no communication. It should be trivial to add a simple "last data received time" value to the socket state, and then close the socket if it gets too far from DateTime.Now.
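A minimal sketch of that idea, assuming some per-connection state object (the ConnectionState type and its members are placeholders, not from the question):

using System;
using System.Net.Sockets;

public class ConnectionState
{
    public Socket Socket;
    public DateTime LastDataReceivedUtc = DateTime.UtcNow;   // UtcNow avoids clock-change surprises
}

public static class IdleSweeper
{
    // Call this on every successful receive for the connection.
    public static void Touch(ConnectionState state)
    {
        state.LastDataReceivedUtc = DateTime.UtcNow;
    }

    // Call this periodically, e.g. from one timer that walks all connections.
    public static void CloseIfIdle(ConnectionState state, TimeSpan idleLimit)
    {
        if (DateTime.UtcNow - state.LastDataReceivedUtc > idleLimit)
        {
            state.Socket.Shutdown(SocketShutdown.Both);
            state.Socket.Close();
        }
    }
}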
But the more important question is "Why?". The best solution depends on what your reasons for this are in the first place. Do you want to make the server usable to more clients by dumping those that aren't sending data? You'll probably make everything worse, since the timeouts for TCP sockets are more like 2-4 minutes, so when you disconnect the client after 20s and it reconnects, it will now be using two server-side ports, instead of one. Oops.
As for your comment on the deleted answer ("a connection without data send and receive, I think it's gonna waste your threads"): that points closer to your real problem. The number of connections your server has should have no relation to how many threads the server uses to service them. The only thing an open connection "wastes" is a bit of memory (depending on how much state you keep per connection, plus the socket with its buffers) and a TCP port. This can be an issue in some applications, but if you ever get to that level of load, you can probably congratulate yourself already. You will much more likely run out of other resources before getting anywhere close to the port limits (an assumption based on the fact that it sounds like you're making an MMO game). If you really do run into those limits, you probably want to drop TCP anyway and rewrite everything on top of UDP (or preferably, some ready-made solution built on UDP).
The Client-Server model describes how a client should connect to a server and perform requests.
What I would recommend is to connect to the server and, when you have finished retrieving all the data you need, close the socket (on the client side).
The server will eventually detect that the socket has been closed and release its resources, but you can check the socket's Connected property to release them sooner.
When the client disconnects from the server, the server can receive a disconnect event. It looks like this:
socket.on('disconnect', function () {
// Disconnect event handling
});
On the client side you can also detect a disconnect event, in which case you need to reconnect to the server.
I have a small client/server application. I was using a hand-coded TCP connection to allow the client to control the server, but now I've converted it to WCF. This saved me a whole bunch of code, but it also gave me a whole new set of problems to fix...
The latest problem is that after a while, the server disconnects the client. I do not want this to ever happen, under any circumstances. Currently the client gets about a quarter of the way through its run, and then explodes with fire because the server has dropped the connection. I need to stop this happening.
I was able to write a trivial WCF client/server pair that replicates the problem. It seems that if the client calls a method, waits 15 minutes, and then calls a second method, the second call throws an exception babbling something about the socket having been closed. If I reduce the delay, everything works fine.
I read in another answer somewhere that setting ReceiveTimeout should fix this. However, when I tried it, this only fixes the problem under .NET; when running under Mono, it still breaks. Since Mono is the actual target platform, this isn't very helpful.
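For reference, the ReceiveTimeout setting mentioned here is applied to the binding on both ends, roughly as in this minimal sketch; it assumes the code-based NetTcpBinding configuration described below, with placeholder contract and address names:

using System;
using System.ServiceModel;

// NetTcpBinding's default ReceiveTimeout is 10 minutes; raise it on both server and client.
var binding = new NetTcpBinding();
binding.ReceiveTimeout = TimeSpan.FromDays(1);

// Server side (self-hosted console app):
var host = new ServiceHost(typeof(ControlService));
host.AddServiceEndpoint(typeof(IControlService), binding, "net.tcp://0.0.0.0:8000/control");
host.Open();

// Client side:
var factory = new ChannelFactory<IControlService>(binding, "net.tcp://server:8000/control");
IControlService proxy = factory.CreateChannel();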
(Think about SSH - you would not want an SSH server to disconnect you just because you didn't type anything for a while. Perhaps you issued a long-running shell command or something... Just because the server hasn't received any data from you doesn't mean nothing is happening! It certainly doesn't mean your connection should get dropped...)
All code is C#. The server is a self-hosting console app. The client is also a console app. All configuration is in code. Binding is NetTcpBinding with default settings.
What can I do to allow the client to run to completion successfully?
I have a few ideas, but none of them are pretty:
Manually send heartbeat messages. (Yuck!)
Detect disconnection and automatically reconnect? (Again, yuck.)
Turn on "reliable mode". (I'm guessing that since the server deliberately ends the session, this won't help.)
Create one connection per method call. (That's going to be quite a lot of code...)
Stop using WCF?
In the end I "fixed" this by having the client make a new connection for every single command. This works acceptably because the client doesn't send commands all that often. It's annoying having to write the connect/disconnect code a dozen times though...
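A minimal sketch of one way to avoid writing the connect/disconnect code a dozen times, assuming a shared ChannelFactory and a small helper (the ServiceClient and IControlService names are placeholders, not from the original code):

using System;
using System.ServiceModel;

public static class ServiceClient
{
    // The factory is reused; only the channels are created (and closed) per call.
    private static readonly ChannelFactory<IControlService> Factory =
        new ChannelFactory<IControlService>(new NetTcpBinding(), "net.tcp://server:8000/control");

    public static void Call(Action<IControlService> action)
    {
        IControlService proxy = Factory.CreateChannel();
        var channel = (IClientChannel)proxy;
        try
        {
            action(proxy);
            channel.Close();
        }
        catch
        {
            channel.Abort();   // never Close() a faulted channel
            throw;
        }
    }
}

// Usage: ServiceClient.Call(s => s.DoSomething());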
So I realize this is a pretty loaded question, but here's what I'm trying to gauge.
I've got a server that accepts reliable-session TCP connections via WCF and opens a callback channel to the client. 99.999% of the time, it's just connected, waiting for the server to issue a callback (not actively processing anything, just maintaining the connection).
What kind of per-machine bottlenecks will I hit? I've already handled the WCF <serviceThrottling /> settings, but from a load/max-connection/"anything else I'm missing" standpoint, I'm trying to get a sense of how many clients can be served per Azure Small Instance, given that by and large these guys will be sitting idly by, just waiting.
If you're opening outbound connections, you'll want to consider increasing
ServicePointManager.DefaultConnectionLimit
in your role OnStart() code. I can't recall the default, but I believe it's 12.
While you're at it, might as well consider setting
ServicePointManager.UseNagleAlgorithm
to false if you push lots of short messages (under, oh, 1400 bytes). Otherwise the messages get buffered up to a half-second. I gave a bit more detail on Nagle in this SO answer.
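A minimal sketch of where those settings would go, assuming a standard Azure RoleEntryPoint; the connection limit of 100 is just an illustrative number, not a recommendation:

using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Raise the outbound connection limit from its small default.
        ServicePointManager.DefaultConnectionLimit = 100;

        // Disable Nagle if you push lots of short messages and latency matters.
        ServicePointManager.UseNagleAlgorithm = false;

        return base.OnStart();
    }
}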
I want to create a remote web service for an application that is currently available only locally. This application controls three devices (each controlled separately) connected on a serial port. The problem is that I don't know how to handle passing back the information that a device has returned the requested data. For example, I send a move command to the motion device, which is very slow and can take a minute or more. Can I just set a big timeout on the client side (and the server side) and return, for example, true/false when the operation is completed, or is this a bad idea? Is SOAP with big timeouts OK?
The other question is whether Mono on Linux (Ubuntu 9.10, Mono 2.4) is stable enough for hosting a web service, or whether I should choose Java or some other language.
I'm open for recommendations.
Thanks for your help!
Using big timeouts is not a good idea. It wastes resources on both the server and the client and you will not be able to detect a "true" timeout condition, when the server is unavailable for example, before the allocated timeout expires.
You really have two options. The first is to use polling: return immediately from the motion request command, acknowledging the reception of the command (and not its completion), and then send requests at regular intervals asking whether the command has completed.
The other alternative requires the client to be able to register a callback endpoint, which the server will call when the motion completes. This makes the whole process asynchronous, but requires the client to be able to operate in server mode. This is very easy to do with WCF - I don't know however if this functionality is available in Mono.
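A minimal sketch of the polling option as a WCF-style contract; the operation names and the ticket idea are placeholders for illustration, not part of the question:

using System.ServiceModel;

[ServiceContract]
public interface IMotionService
{
    // Returns immediately with a ticket acknowledging that the command was accepted.
    [OperationContract]
    string MoveAxis(int axis, double position);

    // The client polls this at regular intervals until it returns true.
    [OperationContract]
    bool IsMoveComplete(string ticket);
}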
Not directly related to your question, but consider com0com and its friends hub4com and com2tcp.
I've got a project where I'm hitting a bunch of custom Windows Performance Counters on multiple servers and aggregating them into a database. If a server is down, I want to skip it, and just continue on with my day.
Currently I'm checking whether a server is live by doing a DirectoryInfo on a share that I have to look at later in the process anyway, then checking the .Exists property. This is my current code snippet for testing:
DirectoryInfo di = new DirectoryInfo(machine.Share_Path);
if (!di.Exists)
{
log.Warn("Could not access " + machine.Name + "! Maybe its down?");
continue; // Skips to the next server in my loop where this snippet exists.
}
This works, but it's pretty slow. It takes about 68 seconds on average for the di.Exists bit to finish its work, and I ideally need to know within a second whether or not a server is accessible. Pinging also isn't an option, since a server can be pingable but not "live" in our environment.
I'm still kind of fresh to the .NET world, so I'm open to any advice people can offer.
Thanks in advance.
-Weegee
Ping First, Ask Questions Later
Why not ping first, and then do the di.Exists if you get a response?
That would allow you to fail early in the case that the machine is not reachable, and not waste the time on machines that are down hard.
I have, in fact, used this method successfully before.
MSDN Ping Documentation
Parallelize
Another option you have is to parallelize the checking and act on the servers as they are found to be available.
You could use the Parallel.ForEach() method, and use a thread-safe queue along with a simple consumer thread to do the required action. Combined with the checking method above, this could alleviate almost all of your bottleneck on the up/down checking.
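A minimal sketch of the ping-then-check approach combined with Parallel.ForEach, assuming the machines collection with Name and Share_Path from the question; the 500 ms ping timeout is an arbitrary illustration:

using System.Collections.Concurrent;
using System.IO;
using System.Net.NetworkInformation;
using System.Threading.Tasks;

var liveShares = new ConcurrentQueue<string>();

Parallel.ForEach(machines, machine =>
{
    try
    {
        using (var ping = new Ping())
        {
            if (ping.Send(machine.Name, 500).Status != IPStatus.Success)
                return;   // down hard (or not answering ping), skip quickly
        }
    }
    catch (PingException)
    {
        return;   // name didn't resolve, host unreachable, etc.
    }

    // Only machines that answered get the slower share check.
    if (new DirectoryInfo(machine.Share_Path).Exists)
        liveShares.Enqueue(machine.Share_Path);
});

// A consumer thread (not shown) dequeues from liveShares and does the real work.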
Knock on the Door
Yet another method would be to check whether the required remote service is running (either by hitting its port directly or by querying it with WMI).
Since WMI is almost always running when a machine is up, your connection should be very quick to either succeed or fail.
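A minimal sketch of the WMI variant, assuming the default root\cimv2 namespace (this needs a reference to System.Management.dll):

using System.Management;

public static bool IsWmiReachable(string machineName)
{
    var scope = new ManagementScope(@"\\" + machineName + @"\root\cimv2");
    try
    {
        scope.Connect();            // attempt to reach the WMI service on the remote machine
        return scope.IsConnected;
    }
    catch
    {
        return false;               // RPC unavailable, access denied, host down, etc.
    }
}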
The only "quick" way I think to see if it's up without relying on ping would be to create a socket, and see if you can actually connect to the port of the service you're trying to reach.
This would be the equivalent of telnet servername 135 to see if it's up.
Specifically...
Create a .NET TCP socket client (System.Net.Sockets.TcpClient)
Call BeginConnect() as an asynchronous operation, to connect to the server in question on one of the RPC ports that your directory exists code would use anyway (TCP 135, 139, or 445).
If you don't hear back from it within X milliseconds, call Close() to cancel the connection.
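A minimal sketch of those steps; the port and the one-second timeout are just illustrative defaults:

using System;
using System.Net.Sockets;

public static bool IsPortOpen(string host, int port = 135, int timeoutMs = 1000)
{
    using (var client = new TcpClient())
    {
        // Start the connect asynchronously, then wait up to timeoutMs for it.
        IAsyncResult result = client.BeginConnect(host, port, null, null);
        bool completed = result.AsyncWaitHandle.WaitOne(timeoutMs);

        if (completed && client.Connected)
        {
            client.EndConnect(result);
            return true;
        }

        client.Close();   // cancels a still-pending connect
        return false;
    }
}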
Disclaimer: I have no idea what effect this would have on any threat/firewall protection that may see this type of Connect / Disconnect with no data sent activity as a threat.
Opening Socket to a specific port usually does the trick. If you really want it to be fast, be sure to set the NoDelay property on the new socket (Nagle algorithm) so there is no buffering.
How fast will largely depend on latency, but this is probably the fastest way I know of to connect to an endpoint, and it's pretty simple to parallelize using the async methods. How quickly you can check will also depend on your network topology, but in tests against 1000 servers (latency between 0-75 ms) I've been able to get connectivity state in ~30 seconds. Not scientific data at all, but it should give you the idea.
Also, don't ever do this through UNC file shares, because if the server no longer exists you will have a lot of dangling connections that take forever to time out. So if you have a lot of servers with invalid DNS records and you try to poll them, you will bring Windows down completely over time. Things like File.Exists and any other file access will cause this.
The "Full-Blown" option would be to install a monitoring tool like SCOM (System Center Operations Manager), this has an SDK you can use to query SCOM for (performance) and maintenance information avout machines being monitored. Might be a bridge to far though....
Telnet is another option. Try telnetting to the target machine to see if it responds.
Create a small Windows Service that you install on your target machine, and have the sys admin stop it when they perform maintenance on the target machine (just use a batch file to net stop / net start the service).