Getting a faster result on Ping in C#

I'm making a tool to test the connection to a certain host using the .NET PingReply class. My problem is that it takes a while to get a result when the ping fails. It is a LAN environment, so I can already assume the connection has failed if it takes more than 100 ms. The code below returns a result after 5 seconds (5000 ms) when the connection to the host fails. Can I get a faster result when the connection fails?
Ping x = new Ping();
PingReply reply = x.Send(IPAddress.Parse("192.168.0.1"));
if (reply.Status == IPStatus.Success)
{
    //Do something
}

You can pass a timeout to the Ping.Send() method. Please check out the overloaded members.
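For illustration, a minimal sketch of that overload (the loopback address and the 100 ms budget are just placeholders):

```csharp
using System;
using System.Net;
using System.Net.NetworkInformation;

class PingTimeoutSketch
{
    static void Main()
    {
        var ping = new Ping();
        // The second argument is the timeout in milliseconds; on a LAN,
        // ~100 ms is usually enough to decide a host is unreachable.
        PingReply reply = ping.Send(IPAddress.Parse("127.0.0.1"), 100);
        Console.WriteLine(reply.Status);
    }
}
```

A failed ping then comes back as IPStatus.TimedOut after roughly 100 ms instead of the default 5-second wait.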

Since we can't see your ping call, I'll assume you don't know about the timeout parameter. I usually send an async ping and set the timeout to 3 seconds.
try
{
    Ping ping = new Ping();
    ping.PingCompleted += (sender, e) =>
    {
        if (e.Reply.Status != IPStatus.Success)
        {
            // Report fail
        }
        else
        {
            // Report success
        }
    };
    // 'target' is the host (IPAddress or host name) you want to ping
    ping.SendAsync(target, 3000, target); // Timeout is 3 seconds here
}
catch (Exception)
{
    return;
}

Ping.Send() has an overload with a timeout parameter:
PingReply reply = x.Send(IPAddress.Parse("192.168.0.1"), 100);

You could use an async delegate to kick off the ping. A delegate has a BeginInvoke method that starts the work on a background thread and immediately returns an IAsyncResult. The IAsyncResult exposes a wait handle, AsyncWaitHandle, whose WaitOne method accepts a maximum time to wait: it blocks the current thread for at most that many milliseconds (100 in your case). Afterwards, the IsCompleted property tells you whether the background work has finished. For example:
Func<PingReply> pingDelegate = () => new Ping().Send(IPAddress.Parse("192.168.0.1"));
IAsyncResult result = pingDelegate.BeginInvoke(null, null);

// Wait up to 100 ms for the background thread to complete
result.AsyncWaitHandle.WaitOne(100);

if (result.IsCompleted)
{
    // Ping finished in time; EndInvoke retrieves the PingReply
    PingReply reply = pingDelegate.EndInvoke(result);
    // Do something with the reply
}

I created a live-host scanner too. It uses ARP to check whether a computer is online.
An ARP request is much faster than pinging the host.
Here's the code I used to check if a host is available:
//You'll need this P/Invoke signature, as it is not part of the .NET Framework
[DllImport("iphlpapi.dll", ExactSpelling = true)]
public static extern int SendARP(int DestIP, int SrcIP,
    byte[] pMacAddr, ref uint PhyAddrLen);

//These variables are needed; if the request was a success,
//the MAC address of the host is returned in macAddr
private byte[] macAddr = new byte[6];
private uint macAddrLen = 6;

//Here you can put the IP that should be checked
private IPAddress Destination = IPAddress.Parse("127.0.0.1");

//Send the request and check whether the host is there
if (SendARP((int)Destination.Address, 0, macAddr, ref macAddrLen) == 0)
{
    //SUCCESS! Igor, it's alive!
}
If you're interested Nmap also uses this technique to scan for available hosts.
ARP scan puts Nmap and its optimized algorithms in charge of ARP requests. And if it gets a response back, Nmap doesn't even need to worry about the IP-based ping packets since it already knows the host is up. This makes ARP scan much faster and more reliable than IP-based scans. So it is done by default when scanning ethernet hosts that Nmap detects are on a local ethernet network. Even if different ping types (such as -PE or -PS) are specified, Nmap uses ARP instead for any of the targets which are on the same LAN.
This only works within the current subnet! As long as there is no router between the requesting machine and the target it should work fine.
ARP is a non-routable protocol, and can therefore only be used between systems on the same Ethernet network. [...]
arp-scan can be used to discover IP hosts on the local network. It can discover all hosts, including those that block all IP traffic such as firewalls and systems with ingress filters. - Excerpt from NTA-Monitor wiki
For more information on the SendARP function you can check the pinvoke.net documentation.
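For convenience, the pieces above can be wrapped into a small helper. This is just a sketch (the ArpProbe/IsHostAlive names are mine); remember that SendARP is Windows-only and only works within the local subnet:

```csharp
using System;
using System.Net;
using System.Runtime.InteropServices;

static class ArpProbe
{
    [DllImport("iphlpapi.dll", ExactSpelling = true)]
    static extern int SendARP(int DestIP, int SrcIP, byte[] pMacAddr, ref uint PhyAddrLen);

    // Returns true if the host answered the ARP request.
    public static bool IsHostAlive(IPAddress address)
    {
        byte[] mac = new byte[6];
        uint len = (uint)mac.Length; // must be initialized to the buffer length
        // GetAddressBytes avoids the obsolete IPAddress.Address property.
        int dest = BitConverter.ToInt32(address.GetAddressBytes(), 0);
        return SendARP(dest, 0, mac, ref len) == 0;
    }
}
```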

Related

Determine broken connection in TCP server

I wrote a TCP server. Each time a client connection is accepted, the socket instance returned by Accept or EndAccept (called handler here) and various other information are gathered in an object called TcpClientConnection. I need to determine at certain intervals whether a connection is still alive. The Socket.Connected property is not reliable, and according to the documentation I should use the Poll method with the SelectRead option instead.
In a test scenario I unplug the client's cable and wait for the broken-connection alarm, which is built on handler.Poll(1, SelectMode.SelectRead). It should return true, but that never happens.
This is fundamentally caused by the way the TCP and IP protocols work. The only way to detect whether a connection is broken is to send some data over it. The underlying TCP protocol will then cause acknowledgements to be sent from the receiver back to the sender, which allows a broken connection to be detected.
These articles provide some more information
Do I need to heartbeat to keep a TCP connection open?
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
According to the documentation of Socket.Poll:
This method cannot detect certain kinds of connection problems, such as a broken network cable, or that the remote host was shut down ungracefully. You must attempt to send or receive data to detect these kinds of errors.
In other words, Poll is useful for checking whether some data has arrived and is available to your local OS networking stack.
If you need to detect connection issues, you have to call a blocking read (e.g. Socket.Receive).
You can also build a simple initialization mini-protocol to exchange an agreed 'hello' message back and forth.
Here is a simplified example of how you can do it:
private bool VerifyConnection(Socket socket)
{
    byte[] b = new byte[1];
    try
    {
        if (socket.Receive(b, 0, 1, SocketFlags.None) == 0)
            throw new SocketException(System.Convert.ToInt32(SocketError.ConnectionReset));
        socket.NoDelay = true;
        socket.Send(new byte[1] { SocketHelper.HelloByte });
        socket.NoDelay = false;
    }
    catch (Exception e)
    {
        this._logger.LogException(LogLevel.Fatal, e, "Attempt to connect (from: [{0}]), but encountered error during reading initialization message", socket.RemoteEndPoint);
        socket.TryCloseSocket(this._logger);
        return false;
    }
    if (b[0] != SocketHelper.HelloByte)
    {
        this._logger.Log(LogLevel.Fatal,
            "Attempt to connect (from: [{0}]), but incorrect initialization byte sent: [{1}], Ignoring the attempt",
            socket.RemoteEndPoint, b[0]);
        socket.TryCloseSocket(this._logger);
        return false;
    }
    return true;
}

C# Ping delay/slow under Mono

I'm experiencing a delay issue with Ping.Send in C# .Net 4.5 running under Mono 3.2.8. My code looks like this:
using (var sw = new StreamWriter("/ping.txt"))
{
    var ping = new Ping();
    PingReply reply;

    sw.WriteLine("Pre ping: {0}", DateTime.Now);
    // Ping local machine
    reply = ping.Send("172.16.1.100", 60);
    sw.WriteLine("Post ping: {0}", DateTime.Now);
    if (reply != null && reply.Status == IPStatus.Success)
    {
        sw.WriteLine("Success! RTT: {0}", reply.RoundtripTime);
    }

    sw.WriteLine("Pre ping: {0}", DateTime.Now);
    // Ping Google
    reply = ping.Send("216.58.220.110", 60);
    sw.WriteLine("Post ping: {0}", DateTime.Now);
    if (reply != null && reply.Status == IPStatus.Success)
    {
        sw.WriteLine("Success! RTT: {0}", reply.RoundtripTime);
    }
}
The output from running the above code under Mono on Linux is:
Pre ping: 03/17/2015 15:43:21
Post ping: 03/17/2015 15:43:41
Success! RTT: 2
Pre ping: 03/17/2015 15:43:41
Post ping: 03/17/2015 15:44:01
Success! RTT: 46
You can see that between the "Pre" and "Post" timestamps, there is a delay of 20 seconds (this is consistent, it's always 20 seconds). The machine running Mono is on the same 172.16.1.* network, I threw the Google ping in there for an extra test.
Running the same code locally on my Windows machine produces the following output (no delay on the pings):
Pre ping: 17/03/2015 3:38:21 PM
Post ping: 17/03/2015 3:38:21 PM
Success! RTT: 3
Pre ping: 17/03/2015 3:38:21 PM
Post ping: 17/03/2015 3:38:21 PM
Success! RTT: 46
Any ideas as to what's going on here? I need to ping hundreds of machines, so a delay of 20 seconds per ping isn't acceptable.
UPDATE:
I've tried using the Ping.SendAsync method with the code below:
private void PingAsyncTest()
{
    var ipAddresses = new List<String> { "172.16.1.100", "216.58.220.110" };
    foreach (var ipAddress in ipAddresses)
    {
        using (var ping = new Ping())
        {
            ping.PingCompleted += PingCompleted;
            ping.SendAsync(IPAddress.Parse(ipAddress), 1000);
        }
    }
}

private void PingCompleted(object sender, PingCompletedEventArgs e)
{
    if (e.Reply.Status == IPStatus.Success)
    {
        // Update successful ping in the DB.
    }
}
I'm still seeing the 20-second delay between the SendAsync call and when the reply comes into PingCompleted. This is slightly nicer than the original code, where the application would wait the 20 seconds before sending the next ping; this way all pings are sent and received asynchronously, so there is no need to wait 20 seconds for each one. Still not ideal, though.
The way this goes depends very much on how the permissions are set up.
If your application gets enough permissions, it will directly try to send an ICMP request. On the other hand, if it's not allowed to send ICMP, it will run the ping executable (trying to find it in /bin/ping, /sbin/ping and /usr/sbin/ping).
First thing you might want to check is which of those actually happens. Does ping execute while you're trying to do the pings? Does it help if you sudo your application?
The default timeout is four seconds, so it shouldn't ever take 20 seconds - you should have gotten a timeout long before that. And you're explicitly passing a timeout of 60 milliseconds.
All this (along with a good look at the code handling pings in Mono) suggests one of those:
The 20s are required for the initial setup of the Ping class itself - querying for capabilities, finding ping etc. This obviously isn't the case, since you're trying two pings and each of them takes this long.
Most of the time is spent outside of the actual ICMP/ping code. The most likely place being for example Dns.GetHostName or Dns.GetHostAddresses. Check both separately from the ping itself.
Some other thread / process is interfering with your own pings. The ICMP socket will get all the ICMP responses, since there's no concept of ports etc. in ICMP.
The last point also alludes to another issue: if you're trying to ping a lot of different hosts, you really don't want to use Ping, at least not on Linux. Instead, you'll want to ensure your application runs privileged (enough permissions to do raw ICMP) and handle all the ICMP requests and replies over a single Socket. If you send 100 requests in parallel using Ping.Send, each of those Ping.Sends will have to go through all the replies, not just the one it is expecting. Also, using 60 ms as a timeout doesn't sound like a good idea, since the code uses DateTime.Now to check timeouts, which can have very low timer resolution.
Instead of sending a request and waiting for a reply, you really want to use asynchronous sockets to send and receive all the time, until you go through all the hosts you want to ping, while checking for the ones where you didn't get a reply in time.
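Short of writing that raw-socket version, the simplest way to overlap the waits is to fire all pings concurrently and let a short timeout bound the total wall time. A sketch under my own assumptions (the ProbeAsync helper and the 100 ms budget are invented here; it uses the standard Ping.SendPingAsync):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.NetworkInformation;
using System.Threading.Tasks;

static class ParallelPing
{
    // Pings all hosts concurrently; total wall time is roughly one
    // timeout instead of one timeout per host.
    public static async Task<Dictionary<string, bool>> ProbeAsync(
        IReadOnlyList<string> hosts, int timeoutMs = 100)
    {
        var tasks = hosts.Select(h => PingOneAsync(h, timeoutMs)).ToArray();
        bool[] alive = await Task.WhenAll(tasks);
        return hosts.Zip(alive, (h, a) => new { h, a })
                    .ToDictionary(x => x.h, x => x.a);
    }

    static async Task<bool> PingOneAsync(string host, int timeoutMs)
    {
        using (var ping = new Ping())
        {
            try
            {
                PingReply reply = await ping.SendPingAsync(host, timeoutMs);
                return reply.Status == IPStatus.Success;
            }
            catch (PingException)
            {
                return false; // e.g. name resolution failure
            }
        }
    }
}
```

This doesn't remove the per-reply filtering the answer describes, but it stops the waits from being serialized.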

Weird tcp connection scenario

I am using TCP as a mechanism for keep-alive. Here is my code:
Client
TcpClient keepAliveTcpClient = new TcpClient();
keepAliveTcpClient.Connect(HostId, tcpPort);
//This 'read' is supposed to block until a legal disconnect is requested
//or until the server unexpectedly disappears
int numberOfBytes = keepAliveTcpClient.GetStream().Read(new byte[10], 0, 10);
//more client code...
Server
TcpListener _tcpListener = new TcpListener(IPAddress.Any, 1000);
_tcpListener.Start();
_tcpClient = _tcpListener.AcceptTcpClient();
Tracer.Write(Tracer.TraceLevel.INFO, "get a client");
buffer = new byte[10];
numOfBytes = _tcpClient.GetStream().Read(buffer, 0, buffer.Length);
if (numOfBytes == 0)
{
    //Shouldn't reach here unless the connection is closed...
}
I've included only the relevant code. What happens is that the client's Read blocks as expected, but the server's Read returns immediately with numOfBytes equal to 0. Even if I retry the read on the server, it returns immediately, yet the client's Read still blocks. So the server mistakenly thinks the client has disconnected, while the client thinks it is still connected. Can someone tell me how this is possible, or what is wrong with my mechanism?
Edit: After a failure I wrote to the log these properties:
_tcpClient: _tcpClient.Connected=true
Socket: (_tcpClient.Client properties)
_tcpClient.Client.Available=0
_tcpClient.Client.Blocking=true
_tcpClient.Client.Connected=true
_tcpClient.Client.IsBound=true
Stream details
_tcpClient.GetStream().DataAvailable=false;
Even when correctly implemented, this approach will only detect some remote server failures. Consider the case where the intervening network partitions the two machines: the system will then detect the failure only when the underlying TCP stack sends a transport-level keep-alive. Keepalive is a good description of the problem, and "Does a TCP socket connection have a keep alive?" is a companion question. The RFC indicates the functionality is optional.
The only certain way to reliably confirm that the other party is still alive is to occasionally send actual data between the two endpoints. This will result in TCP promptly detecting the failure and reporting it back to the application.
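As a middle ground, the OS can be asked to send such probes for you. A sketch of enabling transport-level keep-alive on a socket (note that the default probe interval is very long, typically two hours, so the application-level data exchange described above is usually still the better tool):

```csharp
using System;
using System.Net.Sockets;

static class KeepAliveConfig
{
    // Enables OS-level TCP keep-alive probes on a socket. The OS will
    // then periodically probe an idle connection and surface an error
    // on the next read/write if the peer is gone.
    public static void EnableKeepAlive(Socket socket)
    {
        socket.SetSocketOption(SocketOptionLevel.Socket,
                               SocketOptionName.KeepAlive, true);
    }
}
```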
Maybe something that will give a clue: it happens only when 10 or more clients connect to the server at the same time (the server listens on 10 or more ports).
If you're writing this code on Windows 7/8, you may be running into a connection limit issue. Microsoft's license allows 20 concurrent connections, but the wording is very specific:
[Start->Run->winver, click "Microsoft Software License Terms"]
3e. Device Connections. You may allow up to 20 other devices to access software installed on the licensed computer to use only File Services, Print Services, Internet Information Services and Internet Connection Sharing and Telephony Services.
Since what you're doing isn't file, print, IIS, ICS, or telephony, it's possible that the previous connection limit of 10 from XP/Vista is still enforced in these circumstances. Set a limit of concurrent connections to 9 in your code temporarily, and see if it keeps happening.
The way I am interpreting the MSDN remarks, that behavior seems expected: if no data is available, the Read method returns 0.
With that in mind, I would try sending data at a specified interval, as some of the previous suggestions describe, along with a timeout of some sort. If you don't see the "ping" within your designated interval, you can fail the keepalive. With TCP, keep in mind that there is no requirement to deem a connection "broken" just because you aren't seeing data. You could completely unplug the network cables and the connection would still be considered good, right up until you send some data. Once you send data, you'll see one of two behaviors: either you'll never see a response (the listening machine was shut down?) or you'll get an "ack-reset" (the listening machine is no longer listening on that particular socket).
https://msdn.microsoft.com/en-us/library/vstudio/system.net.sockets.networkstream.read(v=vs.100).aspx
Remarks:
This method reads data into the buffer parameter and returns the number of bytes successfully read. If no data is available for reading, the Read method returns 0. The Read operation reads as much data as is available, up to the number of bytes specified by the size parameter. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and returns zero bytes.
As I see it, you are reading data on both sides, server and client. You need to write some data from the server to the client to ensure that your client has something to read. You can find a small test program below (the Task stuff is just there to run the server and client in the same program).
class Program
{
    private static Task _tcpServerTask;
    private const int ServerPort = 1000;

    static void Main(string[] args)
    {
        StartTcpServer();
        KeepAlive();
        Console.ReadKey();
    }

    private static void StartTcpServer()
    {
        _tcpServerTask = new Task(() =>
        {
            var tcpListener = new TcpListener(IPAddress.Any, ServerPort);
            tcpListener.Start();
            var tcpClient = tcpListener.AcceptTcpClient();
            Console.WriteLine("Server got client ...");
            using (var stream = tcpClient.GetStream())
            {
                const string message = "Stay alive!!!";
                var arrayMessage = Encoding.UTF8.GetBytes(message);
                stream.Write(arrayMessage, 0, arrayMessage.Length);
            }
            tcpListener.Stop();
        });
        _tcpServerTask.Start();
    }

    private static void KeepAlive()
    {
        var tcpClient = new TcpClient();
        tcpClient.Connect("127.0.0.1", ServerPort);
        using (var stream = tcpClient.GetStream())
        {
            var buffer = new byte[16];
            while (stream.Read(buffer, 0, buffer.Length) != 0)
                Console.WriteLine("Client received: {0} ", Encoding.UTF8.GetString(buffer));
        }
    }
}

C# UDP send/sendAsync too slow

I'm trying to send many small (~350 byte) UDP messages to different remote hosts, one packet per remote host. I'm using a background thread to listen for responses:
private void ReceivePackets()
{
    while (true)
    {
        try
        {
            receiveFrom = new IPEndPoint(IPAddress.Any, localPort);
            byte[] data = udpClientReceive.Receive(ref receiveFrom);
            // Decode message & short calculation
        }
        catch (Exception ex)
        {
            Log("Error receiving data: " + ex.ToString());
        }
    }
}
and a main thread for sending messages using
udpClientSend.SendAsync(send_buffer, send_buffer.Length, destinationIPEP);
Both UdpClient udpClientReceive and UdpClient udpClientSend are bound to the same port.
The problem is that SendAsync() takes around 15 ms to complete, and I need to send a few thousand packets per second. I already tried udpClientSend.Send(send_buffer, send_buffer.Length, destination); which is just as slow. I also set both the receive and send buffers higher, and I tried setting udpClientSend.Client.SendTimeout = 1; which has no effect. I suspect it might have to do with the remote host changing for every single packet. If that is the case, will using many UdpClients in separate threads make things faster?
Thanks for any help!
Notes:
Network bandwidth is not the problem and I need to use UDP not TCP.
I've seen similar questions on this website but none have a satisfying answer.
Edit
There is only one thread for sending, it runs a simple loop in which udpClientSend.SendAsync() is called.
I'm querying nodes in the DHT (bittorrent hashtable) so multicasting is not an option (?) - every host only gets 1 packet.
Exchanging UDPClient class with the Socket class and using AsyncSendTo() does not speed things up (or insignificantly).
I have narrowed down the problem: Changing the remote host address to some fixed IP & port increases throughput to over 3000 packets/s. Thus changing the destination address too often seems to be the bottleneck.
I'm thinking my problem might be related to UDP "Connect"-Speed in C# and UDPClient.Connect() is slowing down the code. If so, is there a fix for this? Is it a language or an OS problem?
Why are you using UdpClient.Connect before sending? It's an optional step, for when you want to set a default host. It looks like you want to send to multiple destinations, so you can simply set the remote host each time you call the Send method.
If you remove the Connect call you may see an improvement, but it still looks like you have a bottleneck somewhere else in your code. Isolate the Send calls and measure their time, then start adding more steps until you find what's slowing you down.
var port = 4242;
var ipEndPoint1 = new IPEndPoint(IPAddress.Parse("192.168.56.101"), port);
var ipEndPoint2 = new IPEndPoint(IPAddress.Parse("192.168.56.102"), port);
var buff = new byte[350];
var client = new UdpClient();
int count = 0;
var stopWatch = new Stopwatch();
stopWatch.Start();
while (count < 3000)
{
    IPEndPoint endpoint = ipEndPoint1;
    if ((count % 2) == 0)
        endpoint = ipEndPoint2;
    client.Send(buff, buff.Length, endpoint);
    count++;
}
stopWatch.Stop();
Console.WriteLine("RunTime " + stopWatch.Elapsed.TotalMilliseconds.ToString());

Handling network disconnect

I am trying to do "long polling" with an HttpWebRequest object.
In my C# app, I make an HTTP GET request using HttpWebRequest, and then wait for the response with BeginGetResponse(). I am using ThreadPool.RegisterWaitForSingleObject to wait for the response or to time out (after 1 minute).
I have set the target web server to take a long time to respond. So that, I have time to disconnect the network cable.
After sending the request, I pull the network cable.
Is there a way to get an exception when this happens? So I don't have to wait for the timeout?
Instead of an exception, the timeout (from RegisterWaitForSingleObject) happens after the 1 minute timeout has expired.
Is there a way to determine that the network connection went down? Currently, this situation is indistinguishable from the case where the web server takes more than 1 minute to respond.
I found a solution:
Before calling BeginGetResponse, I can call the following on the HttpWebRequest:
req.ServicePoint.SetTcpKeepAlive(true, 10000, 1000);
I think this means that after 10 seconds of inactivity, the client will send a TCP keep-alive to the server. That keep-alive will fail if the network connection is down because the cable was pulled.
So when the cable is pulled, a keep-alive gets sent within 10 seconds (at most), and then the callback for BeginGetResponse fires. In the callback, I get an exception when I call req.EndGetResponse().
I guess this defeats one of the benefits of long polling, though, since we're still sending packets around.
I'll leave it to you to try pulling the plug on this.
ManualResetEvent done = new ManualResetEvent(false);

void Main()
{
    // set physical address of network adapter to monitor operational status
    string physicalAddress = "00215A6B4D0F";

    // create web request
    var request = (HttpWebRequest)HttpWebRequest.Create(new Uri("http://stackoverflow.com"));

    // create timer to cancel operation on loss of network
    var timer = new System.Threading.Timer((s) =>
    {
        NetworkInterface networkInterface =
            NetworkInterface.GetAllNetworkInterfaces()
                .FirstOrDefault(nic => nic.GetPhysicalAddress().ToString() == physicalAddress);

        if (networkInterface == null)
        {
            throw new Exception("Could not find network interface with physical address " + physicalAddress + ".");
        }
        else if (networkInterface.OperationalStatus != OperationalStatus.Up)
        {
            Console.WriteLine("Network is down, aborting.");
            request.Abort();
            done.Set();
        }
        else
        {
            Console.WriteLine("Network is still up.");
        }
    }, null, 100, 100);

    // start asynchronous request
    IAsyncResult asynchResult = request.BeginGetResponse(new AsyncCallback((o) =>
    {
        try
        {
            var response = (HttpWebResponse)request.EndGetResponse((IAsyncResult)o);
            var reader = new StreamReader(response.GetResponseStream(), System.Text.Encoding.UTF8);
            var writer = new StringWriter();
            writer.Write(reader.ReadToEnd());
            Console.Write(writer.ToString());
        }
        finally
        {
            done.Set();
        }
    }), null);

    // wait for the end
    done.WaitOne();
}
I don't think you're going to like this, but you can test for internet connectivity after you create the request to the slow server.
There are many ways to do that, from another request (to google.com, or to some IP address in your network) to P/Invoke. You can get more info here: Fastest way to test internet connection.
After you create the original request, you go into a loop that checks for internet connectivity until either the internet goes down or the original request comes back (it can set a variable to stop the loop).
Does that help at all?
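Instead of hand-rolling that loop, one option is to lean on the framework's connectivity notifications. A sketch (note this only tells you whether any local network interface is up, not whether the remote server is reachable):

```csharp
using System;
using System.Net.NetworkInformation;

class ConnectivityWatch
{
    static void Main()
    {
        // Fires when the last available interface goes down
        // (or the first one comes back up) - no polling loop needed.
        NetworkChange.NetworkAvailabilityChanged += (s, e) =>
            Console.WriteLine("Network available: {0}", e.IsAvailable);

        // One-off check, e.g. right after creating the request.
        Console.WriteLine(NetworkInterface.GetIsNetworkAvailable());
    }
}
```

In the unavailable case you could then call request.Abort(), which makes the pending BeginGetResponse callback complete with an exception, much like the timer-based example above.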
