I'm experiencing a delay issue with Ping.Send in C# (.NET 4.5) running under Mono 3.2.8. My code looks like this:
using (var sw = new StreamWriter("/ping.txt"))
{
    var ping = new Ping();
    PingReply reply;

    sw.WriteLine("Pre ping: {0}", DateTime.Now);
    // Ping local machine
    reply = ping.Send("172.16.1.100", 60);
    sw.WriteLine("Post ping: {0}", DateTime.Now);
    if (reply != null && reply.Status == IPStatus.Success)
    {
        sw.WriteLine("Success! RTT: {0}", reply.RoundtripTime);
    }

    sw.WriteLine("Pre ping: {0}", DateTime.Now);
    // Ping Google
    reply = ping.Send("216.58.220.110", 60);
    sw.WriteLine("Post ping: {0}", DateTime.Now);
    if (reply != null && reply.Status == IPStatus.Success)
    {
        sw.WriteLine("Success! RTT: {0}", reply.RoundtripTime);
    }
}
The output from running the above code under Mono on Linux is:
Pre ping: 03/17/2015 15:43:21
Post ping: 03/17/2015 15:43:41
Success! RTT: 2
Pre ping: 03/17/2015 15:43:41
Post ping: 03/17/2015 15:44:01
Success! RTT: 46
You can see that between the "Pre" and "Post" timestamps there is a delay of 20 seconds (this is consistent; it's always 20 seconds). The machine running Mono is on the same 172.16.1.* network; I threw the Google ping in there as an extra test.
Running the same code locally on my Windows machine produces the following output (no delay on the pings):
Pre ping: 17/03/2015 3:38:21 PM
Post ping: 17/03/2015 3:38:21 PM
Success! RTT: 3
Pre ping: 17/03/2015 3:38:21 PM
Post ping: 17/03/2015 3:38:21 PM
Success! RTT: 46
Any ideas as to what's going on here? I need to ping hundreds of machines, so a 20-second delay per ping isn't acceptable.
UPDATE:
I've tried using the Ping.SendAsync method with the code below:
private void PingAsyncTest()
{
    var ipAddresses = new List<string> { "172.16.1.100", "216.58.220.110" };
    foreach (var ipAddress in ipAddresses)
    {
        using (var ping = new Ping())
        {
            ping.PingCompleted += PingCompleted;
            ping.SendAsync(IPAddress.Parse(ipAddress), 1000);
        }
    }
}

private void PingCompleted(object sender, PingCompletedEventArgs e)
{
    if (e.Reply.Status == IPStatus.Success)
    {
        // Update successful ping in the DB.
    }
}
I'm still seeing the 20-second delay between the SendAsync call and when the reply arrives in PingCompleted. This is slightly nicer than the original code, where the application would wait the full 20 seconds before sending off the next ping; this way all pings are sent and received asynchronously, so there is no need to wait 20 seconds for each one. Still not ideal, though.
The way this goes depends very much on how the permissions are set up.
If your application gets enough permissions, it will directly try to send an ICMP request. On the other hand, if it's not allowed to send ICMP, it will run the ping executable (trying to find it in /bin/ping, /sbin/ping and /usr/sbin/ping).
First thing you might want to check is which of those actually happens. Does ping execute while you're trying to do the pings? Does it help if you sudo your application?
The default timeout is four seconds, so it shouldn't ever take 20 seconds - you should have gotten a timeout long before that. And you're explicitly passing a timeout of 60 milliseconds.
All this (along with a good look at the code handling pings in Mono) suggests one of the following:
The 20s are required for the initial setup of the Ping class itself - querying for capabilities, finding ping etc. This obviously isn't the case, since you're trying two pings and each of them takes this long.
Most of the time is spent outside of the actual ICMP/ping code. The most likely place being for example Dns.GetHostName or Dns.GetHostAddresses. Check both separately from the ping itself.
Some other thread / process is interfering with your own pings. The ICMP socket will get all the ICMP responses, since there's no concept of ports etc. in ICMP.
The last point also hints at another issue - if you're trying to ping a lot of different hosts, you really don't want to use Ping, at least not on Linux. Instead, you'll want to ensure your application runs privileged (enough permissions to do raw ICMP) and handle all the ICMP requests and replies over a single Socket. If you send 100 requests in parallel using Ping.Send, each of those calls has to sift through all the replies, not just the one it is expecting. Also, using 60 ms as a timeout doesn't sound like a good idea, since the code uses DateTime.Now to check the timeouts, which can have very coarse resolution.
Instead of sending a request and waiting for a reply, you really want to use asynchronous sockets to send and receive all the time, until you go through all the hosts you want to ping, while checking for the ones where you didn't get a reply in time.
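To give an idea of what that single-socket approach could look like, here's a rough sketch. It is not Mono's implementation, just a minimal example that assumes you have permission to open a raw socket (e.g. running as root) and uses the standard 8-byte ICMP echo-request header with no payload:

using System;
using System.Net;
using System.Net.Sockets;

class RawPingSketch
{
    // Build a minimal ICMP echo request (type 8, code 0) with the given id/sequence.
    static byte[] BuildEchoRequest(ushort id, ushort seq)
    {
        var packet = new byte[8];
        packet[0] = 8;                        // type: echo request
        packet[1] = 0;                        // code
        packet[4] = (byte)(id >> 8);
        packet[5] = (byte)(id & 0xFF);
        packet[6] = (byte)(seq >> 8);
        packet[7] = (byte)(seq & 0xFF);
        ushort checksum = Checksum(packet);   // computed while bytes 2-3 are still zero
        packet[2] = (byte)(checksum >> 8);
        packet[3] = (byte)(checksum & 0xFF);
        return packet;
    }

    // Standard one's-complement ICMP checksum.
    static ushort Checksum(byte[] data)
    {
        uint sum = 0;
        for (int i = 0; i < data.Length; i += 2)
            sum += (ushort)((data[i] << 8) + (i + 1 < data.Length ? data[i + 1] : 0));
        while ((sum >> 16) != 0)
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (ushort)~sum;
    }

    static void Main()
    {
        var targets = new[] { "172.16.1.100", "216.58.220.110" };
        using (var socket = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Icmp))
        {
            socket.ReceiveTimeout = 1000;

            // Send all requests up front over the one socket.
            ushort seq = 0;
            foreach (var target in targets)
                socket.SendTo(BuildEchoRequest(0x4242, seq++), new IPEndPoint(IPAddress.Parse(target), 0));

            // Collect replies; note this socket sees *all* ICMP traffic, not just our echoes.
            var buffer = new byte[1024];
            EndPoint from = new IPEndPoint(IPAddress.Any, 0);
            try
            {
                for (int i = 0; i < targets.Length; i++)
                {
                    int received = socket.ReceiveFrom(buffer, ref from);
                    // The raw packet starts with the IP header (typically 20 bytes); byte 20 is the ICMP type.
                    if (received > 20 && buffer[20] == 0) // 0 = echo reply
                        Console.WriteLine("Reply from {0}", ((IPEndPoint)from).Address);
                }
            }
            catch (SocketException)
            {
                // ReceiveTimeout expired - the remaining hosts didn't answer in time.
            }
        }
    }
}

A real implementation would match replies back to requests by the id/sequence fields and use asynchronous receives instead of a blocking loop, as described above.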
Related
I have a real head scratcher here (for me).
I have the following setup:
Kubernetes Cluster in Azure (linux VMs)
ASP.NET docker image with TCP server
Software simulating TCP clients
RabbitMQ for notifying incoming messages
Peer behaviour:
The client sends its heartbeat every 10 minutes
The server sends a keep-alive every 5 minutes (nginx-ingress kills connections after being idle for ~10 minutes)
I am testing the performance of my new TCP server. The previous one, written in Java, could easily handle the load I am about to explain. For some reason, the new TCP server, written in C#, loses the connection after about 10-15 minutes.
Here is what I do:
Use the simulator to start 500 clients with a ramp-up of 300s
All connections are established correctly
Most of the time, the first heartbeats and keep-alives are sent and received
After 10+ minutes, I receive 0 bytes from Stream.EndRead() on BOTH ends of the connection.
This is the piece of code that is triggering the error.
var numberOfBytesRead = Stream.EndRead(result);
if (numberOfBytesRead == 0)
{
    This.Close("no bytes read").Sync(); // this is where I end up
    return;
}
In my logging on the server side, I see lots of disconnected ('no bytes read') lines and a lot of exceptions indicating that RabbitMQ is too busy: "None of the specified endpoints were reachable".
My guess would be that the Azure Load Balancer just bounces the connections, but that does not happen with the Java TCP server. Or that the ASP.NET environment is missing some configuration.
Does anyone know why this is happening, and, more importantly, how to fix it?
--UPDATE #1--
I just used 250 devices and that worked perfectly.
I halved the ramp-up and that was a problem again. So this seems to be a performance issue. A component in my chain is too busy.
--UPDATE #2--
I disabled the publishing to RabbitMQ and everything kept working. Now I have to fix the RabbitMQ performance.
I ended up processing the incoming data in a new Task.
This is my code now:
public void ReceiveAsyncLoop(IAsyncResult? result = null)
{
    try
    {
        if (result != null)
        {
            var numberOfBytesRead = Stream.EndRead(result);
            if (numberOfBytesRead == 0)
            {
                This.Close("no bytes read").Sync();
                return;
            }

            var newSegment = new ArraySegment<byte>(Buffer.Array!, Buffer.Offset, numberOfBytesRead);
            // This.OnDataReceived(newSegment); <-- previously this
            Task.Run(() => This.OnDataReceived(newSegment));
        }
        Stream.BeginRead(Buffer.Array!, Buffer.Offset, Buffer.Count, ReadingClient.ReceiveAsyncLoop, null);
    }
    catch (ObjectDisposedException) { /* ILB */ }
    catch (Exception ex)
    {
        Log.Exception(ex, $"000001: {ex.Message}");
    }
}
Now, everything is super fast.
I'm making a tool to test the connection to a certain host using the PingReply class in .NET. My problem is that it takes a while to get a result if the ping fails. It's a LAN environment, so I can already assume the connection has failed if it takes more than 100 ms. The code below shows a result after 5 seconds (5000 ms) if the connection to the host fails. Can I get a faster result even when the connection fails?
Ping x = new Ping();
PingReply reply = x.Send(IPAddress.Parse("192.168.0.1"));
if (reply.Status == IPStatus.Success)
{
    // Do something
}
You can pass a timeout to the Ping.Send() method. Please check out the overloaded members.
Since we can't see your Ping object, I'll assume you don't know about the timeout parameter. I usually send an async ping and set the timeout to 3 seconds.
try
{
    Ping ping = new Ping();
    ping.PingCompleted += (sender, e) =>
    {
        if (e.Reply.Status != IPStatus.Success)
        {
            // Report fail
        }
        else
        {
            // Report success
        }
    };
    ping.SendAsync(target, 3000, target); // Timeout is 3 seconds here
}
catch (Exception)
{
    return;
}
Ping.Send() has an overload with a timeout parameter:
PingReply reply = x.Send(IPAddress.Parse("192.168.0.1"), 100);
You could use an async delegate to kick off the Ping. The delegate's BeginInvoke method runs the ping on a background thread and immediately returns an IAsyncResult. The IAsyncResult exposes a wait handle, AsyncWaitHandle, whose WaitOne method accepts a maximum time to wait. This blocks the current thread for up to the given number of milliseconds (100 in your case), after which you can check the IsCompleted property to see whether the background work has finished. For example:
Func<PingReply> pingDelegate = () => new Ping().Send(IPAddress.Parse("192.168.0.1"));
IAsyncResult result = pingDelegate.BeginInvoke(null, null);

// Wait up to 100 ms for the background ping to finish.
result.AsyncWaitHandle.WaitOne(100);

if (result.IsCompleted)
{
    // Ping completed in time; retrieve the reply.
    PingReply reply = pingDelegate.EndInvoke(result);
    // Do something with the successful reply
}
I created a live host scanner too. It uses ARP to check if a computer is online.
An ARP request is much faster than pinging the host.
Here's the code I used to check if a Host is available:
// You'll need this P/Invoke signature as it is not part of the .NET Framework
[DllImport("iphlpapi.dll", ExactSpelling = true)]
public static extern int SendARP(int DestIP, int SrcIP,
    byte[] pMacAddr, ref uint PhyAddrLen);

// These vars are needed; if the request was a success
// the MAC address of the host is returned in macAddr
private byte[] macAddr = new byte[6];
private uint macAddrLen;

// Here you can put the IP that should be checked
private IPAddress Destination = IPAddress.Parse("127.0.0.1");

// Send the request and check if the host is there
if (SendARP((int)Destination.Address, 0, macAddr, ref macAddrLen) == 0)
{
    // SUCCESS! Igor, it's alive!
}
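If it's useful, the same call can be wrapped into a small helper along these lines (just a sketch; IsHostOnLan is a name I made up, and it assumes the SendARP declaration above):

// Returns true when the target answered the ARP request (SendARP returns 0, i.e. NO_ERROR).
// Only meaningful for IPv4 addresses on the local subnet.
public static bool IsHostOnLan(IPAddress destination)
{
    byte[] macAddr = new byte[6];
    uint macAddrLen = (uint)macAddr.Length;

    // Pack the IPv4 address into an int in network byte order, as SendARP expects.
    int destIp = BitConverter.ToInt32(destination.GetAddressBytes(), 0);

    return SendARP(destIp, 0, macAddr, ref macAddrLen) == 0;
}

Usage would then be something like if (IsHostOnLan(IPAddress.Parse("192.168.0.1"))) { /* host is up */ }.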
If you're interested Nmap also uses this technique to scan for available hosts.
ARP scan puts Nmap and its optimized algorithms in charge of ARP requests. And if it gets a response back, Nmap doesn't even need to worry about the IP-based ping packets since it already knows the host is up. This makes ARP scan much faster and more reliable than IP-based scans. So it is done by default when scanning ethernet hosts that Nmap detects are on a local ethernet network. Even if different ping types (such as -PE or -PS) are specified, Nmap uses ARP instead for any of the targets which are on the same LAN.
This only works within the current subnet! As long as there is no router between the requesting machine and the target it should work fine.
ARP is a non-routable protocol, and can therefore only be used between systems on the same Ethernet network. [...]
arp-scan can be used to discover IP hosts on the local network. It can discover all hosts, including those that block all IP traffic such as firewalls and systems with ingress filters. - Excerpt from NTA-Monitor wiki
For more information on the SendARP function you can check the pinvoke.net documentation.
I am trying to do "long polling" with an HttpWebRequest object.
In my C# app, I am making an HTTP GET request using HttpWebRequest, and then I wait for the response with BeginGetResponse(). I am using ThreadPool.RegisterWaitForSingleObject to wait for the response, or to time out (after 1 minute).
I have set the target web server to take a long time to respond, so that I have time to disconnect the network cable.
After sending the request, I pull the network cable.
Is there a way to get an exception when this happens? So I don't have to wait for the timeout?
Instead of an exception, the timeout callback (from RegisterWaitForSingleObject) only fires after the one-minute timeout has expired.
Is there a way to determine that the network connection went down? Currently, this situation is indistinguishable from the case where the web server takes more than 1 minute to respond.
I found a solution:
Before calling BeginGetResponse, I can call the following on the HttpWebRequest:
req.ServicePoint.SetTcpKeepAlive(true, 10000, 1000);
I think this means that after 10 seconds of inactivity, the client will send a TCP "keep alive" over to the server. That keep-alive will fail if the network connection is down because the network cable was pulled.
So, when the cable is pulled, a keep-alive gets sent within 10 seconds (at most), and then the callback for BeginGetResponse fires. In the callback, I get an exception when I call req.EndGetResponse().
I guess this defeats one of the benefits of long polling, though, since we're still sending packets around.
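For reference, the wiring looks roughly like this (a simplified sketch; the URL and keep-alive values are just placeholders, and it uses System.Net and System.IO):

var req = (HttpWebRequest)WebRequest.Create("http://example.com/poll");
// After 10 seconds of inactivity, send a TCP keep-alive probe every second.
req.ServicePoint.SetTcpKeepAlive(true, 10000, 1000);

req.BeginGetResponse(ar =>
{
    try
    {
        using (var response = (HttpWebResponse)req.EndGetResponse(ar))
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
    catch (WebException ex)
    {
        // When the cable is pulled, the failed keep-alive surfaces here
        // well before the one-minute RegisterWaitForSingleObject timeout.
        Console.WriteLine("Request failed: " + ex.Status);
    }
}, null);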
I'll leave it to you to try pulling the plug on this.
ManualResetEvent done = new ManualResetEvent(false);

void Main()
{
    // set physical address of network adapter to monitor operational status
    string physicalAddress = "00215A6B4D0F";

    // create web request
    var request = (HttpWebRequest)HttpWebRequest.Create(new Uri("http://stackoverflow.com"));

    // create timer to cancel operation on loss of network
    var timer = new System.Threading.Timer((s) =>
    {
        NetworkInterface networkInterface =
            NetworkInterface.GetAllNetworkInterfaces()
                .FirstOrDefault(nic => nic.GetPhysicalAddress().ToString() == physicalAddress);

        if (networkInterface == null)
        {
            throw new Exception("Could not find network interface with physical address " + physicalAddress + ".");
        }
        else if (networkInterface.OperationalStatus != OperationalStatus.Up)
        {
            Console.WriteLine("Network is down, aborting.");
            request.Abort();
            done.Set();
        }
        else
        {
            Console.WriteLine("Network is still up.");
        }
    }, null, 100, 100);

    // start asynchronous request
    IAsyncResult asynchResult = request.BeginGetResponse(new AsyncCallback((o) =>
    {
        try
        {
            var response = (HttpWebResponse)request.EndGetResponse((IAsyncResult)o);
            var reader = new StreamReader(response.GetResponseStream(), System.Text.Encoding.UTF8);
            var writer = new StringWriter();
            writer.Write(reader.ReadToEnd());
            Console.Write(writer.ToString());
        }
        finally
        {
            done.Set();
        }
    }), null);

    // wait for the end
    done.WaitOne();
}
I don't think you are going to like this. You can test for internet connectivity after you create the request to the slow server.
There are many ways to do that - from another request to google.com (or some IP address in your network) to P/Invoke. You can get more info here: Fastest way to test internet connection
After you create the original request, you go into a loop that checks for internet connectivity until either the internet goes down or the original request comes back (it can set a variable to stop the loop).
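Something like this, as a rough sketch (responseArrived would be set from your BeginGetResponse callback, request is your original HttpWebRequest, and it uses System.Net.NetworkInformation and System.Threading):

bool responseArrived = false; // set to true from the BeginGetResponse callback

while (!responseArrived)
{
    // Cheap connectivity check; replace with a ping to a known host if you need more certainty.
    if (!NetworkInterface.GetIsNetworkAvailable())
    {
        // Network is gone: abort the long poll so its callback fires with an exception.
        request.Abort();
        break;
    }
    Thread.Sleep(500);
}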
Does that help at all?
I am working on a project in Visual Studio (C#).
I am collecting data from a device connected to the PC via a serial port.
First I send a request command and wait for a response.
There is a 1-second delay before the device responds after the request command is sent.
The thing is, the device may be unreachable and may not respond sometimes.
In order to wait for a response (if any) and not send the next request command too early, I create a delay using the System.Threading.Thread.Sleep method.
My question is: if I make that delay longer, do I lose incoming serial port data?
The Delay function I use is:
private void Delay(byte WaitMiliSec)
{
    // WaitTime here is increased by a WaitTimer ticking every 100 msec
    WaitTime = 0;
    while (WaitTime < WaitMiliSec)
    {
        System.Threading.Thread.Sleep(25);
        Application.DoEvents();
    }
}
No - you won't lose any data - the serial port has its own buffer, which does not depend on your application at all. The OS and the hardware will handle this for you.
I would suggest refactoring the data send/receive into its own task/thread (see the sketch below). That way you don't need the Application.DoEvents();
If you post some more of your send/receive code I might help you with this.
PS: it seems to me that your code will not work anyway (WaitTime is always zero), but I guess it's just a snippet, right?
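For example, something along these lines (a rough sketch only; the port name, baud rate and HandleResponse are placeholders for your own settings and parsing):

using System;
using System.IO.Ports;

class SerialReceiver
{
    private readonly SerialPort port;

    public SerialReceiver(string portName)
    {
        port = new SerialPort(portName, 9600);
        port.DataReceived += OnDataReceived;   // raised on a thread-pool thread, not the UI thread
        port.Open();
    }

    public void SendRequest(byte[] request)
    {
        port.Write(request, 0, request.Length);
        // No blocking wait here; the response arrives via OnDataReceived whenever the device answers.
    }

    private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        int available = port.BytesToRead;
        var buffer = new byte[available];
        port.Read(buffer, 0, available);
        HandleResponse(buffer);                // placeholder for your own parsing
    }

    private void HandleResponse(byte[] data)
    {
        Console.WriteLine("Received {0} bytes", data.Length);
    }
}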
I'm working on a C# Server application for a game engine I'm writing in ActionScript 3. I'm using an authoritative server model as to prevent cheating and ensure fair game. So far, everything works well:
When the client begins moving, it tells the server and starts rendering locally; the server then tells everyone else that client X has begun moving, along with details so they can also begin rendering. When the client stops moving, it tells the server, which performs calculations based on the time the client began moving and the client render tick delay, and replies to everyone so they can update with the correct values.
The thing is, when I use the default 20 ms tick delay on server calculations, when the client moves for a rather long distance there's a noticeable tilt forward when it stops. If I slightly increase the delay to 22 ms, on my local network everything runs very smoothly, but in other locations the tilt is still there. After experimenting a little, I noticed that the extra delay needed is pretty much tied to the latency between client and server. I even boiled it down to a formula that would work quite nicely: delay = 20 + (latency / 10).
So, how would I go about obtaining the latency between a certain client and the server (I'm using asynchronous sockets)? The CPU cost can't be too high, so as not to slow the server down. Also, is this really the best way, or is there a more efficient/easier way to do this?
Sorry that this isn't directly answering your question, but generally speaking you shouldn't rely too heavily on measuring latency because it can be quite variable. Not only that, you don't know if the ping time you measure is even symmetrical, which is important. There's no point applying 10ms of latency correction if it turns out that the ping time of 20ms is actually 19ms from server to client and 1ms from client to server. And latency in application terms is not the same as in networking terms - you may be able to ping a certain machine and get a response in 20ms but if you're contacting a server on that machine that only processes network input 50 times a second then your responses will be delayed by an extra 0 to 20ms, and this will vary rather unpredictably.
That's not to say latency measurement doesn't have a place in smoothing predictions out, but it's not going to solve your problem, just clean it up a bit.
On the face of it, the problem here seems to be that you're sending information in the first message which is used to extrapolate data from until the last message is received. If all else stays constant, then the movement vector given in the first message multiplied by the time between the messages will give the server the correct end position that the client was in at roughly now-(latency/2). But if the latency changes at all, the time between the messages will grow or shrink. The client may know he's moved 10 units, but the server simulated him moving 9 or 11 units before being told to snap him back to 10 units.
The general solution to this is to not assume that latency will stay constant but to send periodic position updates, which allow the server to verify and correct the client's position. With just 2 messages as you have now, all the error is found and corrected after the 2nd message. With more messages, the error is spread over many more sample points allowing for smoother and less visible correction.
It can never be perfect though: all it takes is a lag spike in the last millisecond of movement and the server's representation will overshoot. You can't get around that if you're predicting future movement based on past events, as there's no real alternative to choosing either correct-but-late or incorrect-but-timely since information takes time to travel. (Blame Einstein.)
One thing to keep in mind when using ICMP based pings is that networking equipment will often give ICMP traffic lower priority than normal packets, especially when the packets cross network boundaries such as WAN links. This can lead to pings being dropped or showing higher latency than traffic is actually experiencing and lends itself to being an indicator of problems rather than a measurement tool.
The increasing use of Quality of Service (QoS) in networks only exacerbates this. As a consequence, though ping remains a useful tool, it needs to be understood that it may not be a true reflection of the network latency experienced by real, non-ICMP traffic.
There is a good post at the Itrinegy blog How do you measure Latency (RTT) in a network these days? about this.
You could use the already available Ping Class. Should be preferred over writing your own IMHO.
Have a "ping" command, where you send a message from the server to the client, then time how long it takes to get a response. Barring CPU overload scenarios, it should be pretty reliable. To get the one-way trip time, just divide the time by 2.
We can measure the round-trip time using the Ping class of the .NET Framework.
Instantiate a Ping and subscribe to the PingCompleted event:
Ping pingSender = new Ping();
pingSender.PingCompleted += PingCompletedCallback;
Add code to configure and action the ping.
Our PingCompleted event handler (PingCompletedEventHandler) has a PingCompletedEventArgs argument. The PingCompletedEventArgs.Reply gets us a PingReply object. PingReply.RoundtripTime returns the round trip time (the "number of milliseconds taken to send an Internet Control Message Protocol (ICMP) echo request and receive the corresponding ICMP echo reply message"):
public static void PingCompletedCallback(object sender, PingCompletedEventArgs e)
{
    ...
    Console.WriteLine($"Roundtrip Time: {e.Reply.RoundtripTime}");
    ...
}
Code-dump of a full working example, based on MSDN's example. I have modified it to write the RTT to the console:
public static void Main(string[] args)
{
    string who = "www.google.com";
    AutoResetEvent waiter = new AutoResetEvent(false);

    Ping pingSender = new Ping();

    // When the PingCompleted event is raised,
    // the PingCompletedCallback method is called.
    pingSender.PingCompleted += PingCompletedCallback;

    // Create a buffer of 32 bytes of data to be transmitted.
    string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
    byte[] buffer = Encoding.ASCII.GetBytes(data);

    // Wait 12 seconds for a reply.
    int timeout = 12000;

    // Set options for transmission:
    // The data can go through 64 gateways or routers
    // before it is destroyed, and the data packet
    // cannot be fragmented.
    PingOptions options = new PingOptions(64, true);
    Console.WriteLine("Time to live: {0}", options.Ttl);
    Console.WriteLine("Don't fragment: {0}", options.DontFragment);

    // Send the ping asynchronously.
    // Use the waiter as the user token.
    // When the callback completes, it can wake up this thread.
    pingSender.SendAsync(who, timeout, buffer, options, waiter);

    // Prevent this example application from ending.
    // A real application should do something useful
    // when possible.
    waiter.WaitOne();
    Console.WriteLine("Ping example completed.");
}
public static void PingCompletedCallback(object sender, PingCompletedEventArgs e)
{
    // If the operation was canceled, display a message to the user.
    if (e.Cancelled)
    {
        Console.WriteLine("Ping canceled.");

        // Let the main thread resume.
        // UserToken is the AutoResetEvent object that the main thread
        // is waiting for.
        ((AutoResetEvent)e.UserState).Set();
        return;
    }

    // If an error occurred, display the exception to the user.
    if (e.Error != null)
    {
        Console.WriteLine("Ping failed:");
        Console.WriteLine(e.Error.ToString());

        // Let the main thread resume.
        ((AutoResetEvent)e.UserState).Set();
        return;
    }

    Console.WriteLine($"Roundtrip Time: {e.Reply.RoundtripTime}");

    // Let the main thread resume.
    ((AutoResetEvent)e.UserState).Set();
}
You might want to perform several pings and then calculate an average, depending on your requirements of course.
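For example, a quick way to average a few pings could look like this (a sketch using the synchronous Send for brevity; the host and attempt count are arbitrary):

using System;
using System.Linq;
using System.Net.NetworkInformation;

class AveragePing
{
    static void Main()
    {
        const int attempts = 5;
        using (var ping = new Ping())
        {
            // Collect the round-trip times of the successful attempts only.
            var times = Enumerable.Range(0, attempts)
                .Select(_ => ping.Send("www.google.com", 1000))   // 1-second timeout per attempt
                .Where(reply => reply.Status == IPStatus.Success)
                .Select(reply => reply.RoundtripTime)
                .ToList();

            if (times.Count > 0)
                Console.WriteLine("Average RTT over {0} successful pings: {1:F1} ms",
                                  times.Count, times.Average());
            else
                Console.WriteLine("No successful replies.");
        }
    }
}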