How do I obtain the latency between server and client in C#?

I'm working on a C# server application for a game engine I'm writing in ActionScript 3. I'm using an authoritative server model to prevent cheating and ensure fair play. So far, everything works well:
When the client begins moving, it tells the server and starts rendering locally; the server then tells everyone else that client X has begun moving, along with details so they can also begin rendering. When the client stops moving, it tells the server, which performs calculations based on the time the client began moving and the client's render tick delay, and replies to everyone so they can update with the correct values.
The thing is, when I use the default 20ms tick delay in the server calculations and the client moves a rather long distance, there's a noticeable tilt forward when it stops. If I increase the delay slightly to 22ms, everything runs very smoothly on my local network, but from other locations the tilt is still there. After experimenting a little, I noticed that the extra delay needed is pretty much tied to the latency between client and server. I even boiled it down to a formula that works quite nicely: delay = 20 + (latency / 10).
So, how would I go about obtaining the latency between a given client and the server (I'm using asynchronous sockets)? The CPU cost can't be too high, so as not to slow the server down. Also, is this really the best way, or is there a more efficient/easier way to do this?
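For illustration only, here is the empirical formula above as a small helper; the method name and integer arithmetic are my own assumptions, not part of the original question:
// Hypothetical helper: applies delay = 20 + (latency / 10),
// where latencyMs is a measured round-trip time in milliseconds.
static int ComputeTickDelayMs(int latencyMs)
{
    return 20 + latencyMs / 10;
}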

Sorry that this isn't directly answering your question, but generally speaking you shouldn't rely too heavily on measuring latency, because it can be quite variable. Not only that, you don't know whether the ping time you measure is even symmetrical, which is important. There's no point applying 10ms of latency correction if it turns out that the 20ms ping time is actually 19ms from server to client and 1ms from client to server. And latency in application terms is not the same as in networking terms: you may be able to ping a certain machine and get a response in 20ms, but if the server on that machine only processes network input 50 times a second, then your responses will be delayed by an extra 0 to 20ms, and this will vary rather unpredictably.
That's not to say latency measurement doesn't have a place in smoothing predictions out, but it's not going to solve your problem, just clean it up a bit.
On the face of it, the problem here seems to be that you're sent information in the first message, which you use to extrapolate data from until the last message is received. If all else stays constant, then the movement vector given in the first message multiplied by the time between the messages will give the server the correct end position that the client was in at roughly now - (latency / 2). But if the latency changes at all, the time between the messages will grow or shrink. The client may know he's moved 10 units, but the server simulated him moving 9 or 11 units before being told to snap him back to 10 units.
The general solution to this is to not assume that latency will stay constant but to send periodic position updates, which allow the server to verify and correct the client's position. With just 2 messages as you have now, all the error is found and corrected after the 2nd message. With more messages, the error is spread over many more sample points allowing for smoother and less visible correction.
It can never be perfect though: all it takes is a lag spike in the last millisecond of movement and the server's representation will overshoot. You can't get around that if you're predicting future movement based on past events, as there's no real alternative to choosing either correct-but-late or incorrect-but-timely since information takes time to travel. (Blame Einstein.)
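A rough sketch of spreading the correction over several ticks, as described above; the PlayerState type, the single axis, and the correction factor are illustrative assumptions, not code from the question or answer:
// Minimal sketch: the client reports its position periodically, and each
// server tick nudges the simulated position toward the latest report
// instead of snapping only when the final "stopped" message arrives.
class PlayerState
{
    public double SimulatedX;     // where the server currently places the player
    public double LastReportedX;  // most recent position reported by the client

    // Call once per server tick; a factor of roughly 0.1-0.3 spreads the
    // correction over several ticks so it is less visible to other players.
    public void Correct(double correctionFactor)
    {
        double error = LastReportedX - SimulatedX;
        SimulatedX += error * correctionFactor;
    }
}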

One thing to keep in mind when using ICMP-based pings is that networking equipment will often give ICMP traffic lower priority than normal packets, especially when the packets cross network boundaries such as WAN links. This can lead to pings being dropped or showing higher latency than the traffic is actually experiencing, which makes ping more of an indicator of problems than a measurement tool.
The increasing use of Quality of Service (QoS) in networks only exacerbates this. As a consequence, while ping remains a useful tool, it needs to be understood that it may not be a true reflection of the network latency experienced by real, non-ICMP traffic.
There is a good post about this on the Itrinegy blog: How do you measure Latency (RTT) in a network these days?

You could use the already available Ping class; IMHO it should be preferred over writing your own.

Have a "ping" command where you send a message from the server to the client and time how long it takes to get a response. Barring CPU overload scenarios, it should be pretty reliable. To get the one-way trip time, just divide the round-trip time by 2.
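A minimal sketch of such an application-level ping over the existing game connection; the opcode, class, and method names are hypothetical, and it assumes you can recognise the client's "pong" reply and match it to the request:
using System.Diagnostics;

class LatencyProbe
{
    private readonly Stopwatch _stopwatch = new Stopwatch();

    public int LastRoundTripMs { get; private set; }

    // Build the "ping" message and start timing; send the result
    // with whatever send method the server already uses.
    public byte[] BuildPingMessage()
    {
        _stopwatch.Reset();
        _stopwatch.Start();
        return new byte[] { 0xFF }; // hypothetical "ping" opcode
    }

    // Call this when the matching "pong" arrives back from the client.
    public void OnPongReceived()
    {
        _stopwatch.Stop();
        LastRoundTripMs = (int)_stopwatch.ElapsedMilliseconds;
    }
}
The one-way latency can then be approximated as LastRoundTripMs / 2, with the caveats about asymmetry mentioned in the first answer.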

We can measure the round-trip time using the Ping class of the .NET Framework.
Instantiate a Ping and subscribe to the PingCompleted event:
Ping pingSender = new Ping();
pingSender.PingCompleted += PingCompletedCallback;
Then add code to configure and send the ping.
Our PingCompleted event handler (PingCompletedEventHandler) has a PingCompletedEventArgs argument. The PingCompletedEventArgs.Reply gets us a PingReply object. PingReply.RoundtripTime returns the round trip time (the "number of milliseconds taken to send an Internet Control Message Protocol (ICMP) echo request and receive the corresponding ICMP echo reply message"):
public static void PingCompletedCallback(object sender, PingCompletedEventArgs e)
{
    ...
    Console.WriteLine($"Roundtrip Time: {e.Reply.RoundtripTime}");
    ...
}
Code-dump of a full working example, based on MSDN's example. I have modified it to write the RTT to the console:
// Requires: using System.Net.NetworkInformation; using System.Text; using System.Threading;
public static void Main(string[] args)
{
    string who = "www.google.com";
    AutoResetEvent waiter = new AutoResetEvent(false);

    Ping pingSender = new Ping();

    // When the PingCompleted event is raised,
    // the PingCompletedCallback method is called.
    pingSender.PingCompleted += PingCompletedCallback;

    // Create a buffer of 32 bytes of data to be transmitted.
    string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
    byte[] buffer = Encoding.ASCII.GetBytes(data);

    // Wait 12 seconds for a reply.
    int timeout = 12000;

    // Set options for transmission:
    // The data can go through 64 gateways or routers
    // before it is destroyed, and the data packet
    // cannot be fragmented.
    PingOptions options = new PingOptions(64, true);
    Console.WriteLine("Time to live: {0}", options.Ttl);
    Console.WriteLine("Don't fragment: {0}", options.DontFragment);

    // Send the ping asynchronously.
    // Use the waiter as the user token.
    // When the callback completes, it can wake up this thread.
    pingSender.SendAsync(who, timeout, buffer, options, waiter);

    // Prevent this example application from ending.
    // A real application should do something useful
    // when possible.
    waiter.WaitOne();
    Console.WriteLine("Ping example completed.");
}
public static void PingCompletedCallback(object sender, PingCompletedEventArgs e)
{
    // If the operation was canceled, display a message to the user.
    if (e.Cancelled)
    {
        Console.WriteLine("Ping canceled.");
        // Let the main thread resume.
        // UserToken is the AutoResetEvent object that the main thread
        // is waiting for.
        ((AutoResetEvent)e.UserState).Set();
        return; // e.Reply is not valid when the ping was canceled
    }

    // If an error occurred, display the exception to the user.
    if (e.Error != null)
    {
        Console.WriteLine("Ping failed:");
        Console.WriteLine(e.Error.ToString());
        // Let the main thread resume.
        ((AutoResetEvent)e.UserState).Set();
        return; // e.Reply is not valid when an error occurred
    }

    Console.WriteLine($"Roundtrip Time: {e.Reply.RoundtripTime}");

    // Let the main thread resume.
    ((AutoResetEvent)e.UserState).Set();
}
You might want to perform several pings and then calculate an average, depending on your requirements of course.
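A minimal sketch of averaging a few synchronous pings; the host, timeout, and sample count are arbitrary, and a real application should also handle PingException:
using System.Net.NetworkInformation;

static long AverageRoundTripMs(string host, int samples)
{
    long total = 0;
    int successful = 0;

    using (Ping pingSender = new Ping())
    {
        for (int i = 0; i < samples; i++)
        {
            PingReply reply = pingSender.Send(host, 1000); // 1 second timeout per attempt
            if (reply.Status == IPStatus.Success)
            {
                total += reply.RoundtripTime;
                successful++;
            }
        }
    }

    return successful > 0 ? total / successful : -1; // -1 means no reply at all
}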

Related

Fastest way to "broadcast" to list of TCP clients

I'm currently writing a chat-server, bottom up, in C#.
It's like one single big room, with all the clients in, and then you can initiate private chats also. I've also laid the code out for future integration of multiple rooms (but not necessary right now).
It's been written mostly for fun, but also because I'm going to make a new chat site for young people like myself, as there are none left here in Denmark.
I've just tested it with 170 clients (written in JavaScript with jQuery and a Flash bridge for socket connectivity). The response time on the local network, from a message being sent to it being delivered, was less than 1 second. But now I'm considering what kind of performance I'm able to squeeze out of this.
I can see that if I connect two clients and then 168 others, write on client 2 and watch client 1, the message comes up immediately on client 1. The CPU and RAM usage show no signs of server stress at all. It copes fine, and I think it can scale to at least 1000-1500 without the slightest problem.
I have however noticed something: if I open the 170 clients again, send a message on client 1 and watch on client 170, there is a lag of around 750 milliseconds or so.
I know the problem: when the server receives a chat message, it broadcasts it to every client on the server. It needs to enumerate all these clients, and that takes time. The delay right now is very acceptable for a chat, but I'm worried that client 1 sending to client 750 (not tested yet) might take 2-3 seconds. I'm also worried about what happens when I start getting maybe 2-3 messages a second.
So to sum it up, I want to speed up the server's broadcasting process. I'm already using a parallel foreach loop and asynchronous sockets.
Here is the broadcasting code:
lock (_clientLock)
{
    Parallel.ForEach(_clients, c =>
    {
        c.Value.Send(message);
    });
}
And here is the send function being invoked on each client:
try
{
    byte[] bytesOut = System.Text.Encoding.UTF8.GetBytes(message + "\0");
    _socket.BeginSend(bytesOut, 0, bytesOut.Length, SocketFlags.None, new AsyncCallback(OnSocketSent), null);
}
catch (Exception ex) { Drop(); }
I want to know if there is any way to speed this up?
I've considered writing some kind of helper class that accepts messages into a queue and then uses maybe 20 threads or so to split up the broadcasting list.
But I want to know YOUR opinions on this topic; I'm a student and I want to learn! (:
Btw, I like how you spot problems in your own code when you're about to post to Stack Overflow. I've now made an overloaded function that accepts a byte array from the server class when broadcasting, so the UTF-8 conversion only needs to happen once. Also, to play it safe, the byte array length is now calculated only once. See the updated version below.
But I'm still interested in ways of improving this even more!
Updated broadcast function:
lock (_clientLock)
{
    byte[] bytesOut = System.Text.Encoding.UTF8.GetBytes(message + "\0");
    int bytesOutLength = bytesOut.Length;

    Parallel.ForEach(_clients, c =>
    {
        c.Value.Send(bytesOut, bytesOutLength);
    });
}
Updated send function on client object:
public void Send(byte[] message, int length)
{
    try
    {
        _socket.BeginSend(message, 0, length, SocketFlags.None, new AsyncCallback(OnSocketSent), null);
    }
    catch (Exception ex) { Drop(); }
}
~1s sounds really slow for a local network. Average LAN latency is 0.3ms. Is Nagle enabled or disabled? I'm guessing it is enabled... so: change that (Socket.NoDelay). That does mean you have to take responsibility for not writing to the socket in an overly-fragmented way, of course - so don't drip the message in character-by-character. Assemble the message to send (or better: multiple outstanding messages) in memory, and send it as a unit.
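A minimal sketch of that change, reusing the _socket field and Send pattern from the question's code; setting NoDelay is the only new line:
// Disable Nagle's algorithm so small messages go out immediately
// instead of waiting in the socket's coalescing buffer.
_socket.NoDelay = true;

// With Nagle off, assemble the whole message (or a batch of pending
// messages) in memory first and send it as one unit, as in the
// broadcast code above.
byte[] bytesOut = System.Text.Encoding.UTF8.GetBytes(message + "\0");
_socket.BeginSend(bytesOut, 0, bytesOut.Length, SocketFlags.None, new AsyncCallback(OnSocketSent), null);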

Waiting for networking C# console application to fully start

I have run into an issue with slow C# start-up time causing UDP packets to drop initially. Below is what I have done to mitigate this start-up delay: I essentially wait an additional 10ms between the first two packet transmissions. This fixes the initial drops, at least on my machine. My concern is that a slower machine may need a longer delay than this.
private void FlushPacketsToNetwork()
{
    MemoryStream packetStream = new MemoryStream();

    while (packetQ.Count != 0)
    {
        byte[] packetBytes = packetQ.Dequeue().ToArray();
        packetStream.Write(packetBytes, 0, packetBytes.Length);
    }

    byte[] txArray = packetStream.ToArray();
    udpSocket.Send(txArray);

    txCount++;

    ExecuteStartupDelay();
}

// socket takes too long to transmit unless I give it some time to "warm up"
private void ExecuteStartupDelay()
{
    if (txCount < 3)
    {
        timer.SpinWait(10e-3);
    }
}
So, I am wondering: is there a better approach to let C# fully load all of its dependencies? I really don't mind if it takes several seconds to completely load; I just do not want to do any high-bandwidth transmissions until C# is ready for full speed.
Additional relevant details
This is a console application; the network transmission is run from a separate thread, and the main thread just waits for a key press to terminate the network transmitter.
In the Program.Main method I have tried to get the most performance from my application by using the highest priorities reasonable:
public static void Main(string[] args)
{
    Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
    ...
    Thread workerThread = new Thread(new ThreadStart(worker.Run));
    workerThread.Priority = ThreadPriority.Highest;
    workerThread.Start();
    ...
    Console.WriteLine("Press any key to stop the stream...");
    WaitForKeyPress();
    worker.RequestStop = true;
    workerThread.Join();
}
Also, the socket settings I am currently using are shown below:
udpSocket = new Socket(targetEndPoint.Address.AddressFamily,
                       SocketType.Dgram,
                       ProtocolType.Udp);
udpSocket.Ttl = ttl;
udpSocket.SendBufferSize = 1024 * 1024;
udpSocket.Blocking = true;
udpSocket.Connect(targetEndPoint);
The default SendBufferSize is 8192, so I went ahead and increased it to a megabyte, but this setting did not seem to have any effect on the dropped packets at the beginning.
From the comments I learned that TCP is not an option for you (because of inherent delays in transmission), and that you also do not want to lose packets because the other side is not fully loaded.
So you actually need to implement some features present in TCP (retransmission), but in a more robust and lightweight fashion. I also assume that you are in control of the receiving side.
I propose that you send some predetermined number of packets and then wait for confirmation. For instance, every packet can carry an id that constantly grows. Every N packets, the receiving application sends the number of the last received packet back to the sender. After receiving this number, the sender will know whether it is necessary to repeat the last N packets.
This approach should not hurt your bandwidth very much, and you will get some information about what data was received (although no guarantee).
Otherwise it is best to switch to TCP. By the way, did you try using TCP? How much does it hurt your bandwidth?
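A rough sketch of the acknowledgement scheme described in this answer; the class, the 4-byte id header, and the resend policy are all illustrative assumptions rather than a definitive implementation:
using System;
using System.Collections.Generic;

class SequencedSender
{
    private uint _nextId;

    // Packets that have been sent but not yet confirmed by the receiver.
    private readonly Dictionary<uint, byte[]> _unacked = new Dictionary<uint, byte[]>();

    // Prepends a 4-byte, ever-growing sequence id to the payload and keeps
    // the packet around so it can be resent if it turns out to be lost.
    public byte[] BuildPacket(byte[] payload)
    {
        uint id = _nextId++;
        byte[] packet = new byte[4 + payload.Length];
        BitConverter.GetBytes(id).CopyTo(packet, 0);
        payload.CopyTo(packet, 4);
        _unacked[id] = packet;
        return packet;
    }

    // Call this when the receiver reports the last id it received.
    // Everything up to that id is confirmed and forgotten; packets with a
    // higher id are returned so the caller can decide whether to resend them.
    public List<byte[]> OnAckReceived(uint lastReceivedId)
    {
        List<uint> confirmed = new List<uint>();
        List<byte[]> outstanding = new List<byte[]>();

        foreach (KeyValuePair<uint, byte[]> entry in _unacked)
        {
            if (entry.Key <= lastReceivedId)
                confirmed.Add(entry.Key);
            else
                outstanding.Add(entry.Value);
        }

        foreach (uint id in confirmed)
            _unacked.Remove(id);

        return outstanding; // a real implementation would also use timeouts
    }
}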

Does delaying block data receiving?

I am working on a project in Visual Studio C#.
I am collecting data from a device connected to the PC via a serial port.
First I send a request command and wait for the response.
There is a 1-second delay before the device responds after the request command is sent.
The thing is, the device may not be reachable and may not respond sometimes.
In order to wait for the response (if any) and not send the next request command too early, I add a delay using System.Threading.Thread.
My question is: if I make that delay time longer, do I lose incoming serial port data?
The Delay function I use is:
private void Delay(byte WaitMiliSec)
{
    // WaitTime here is increased by a WaitTimer ticking at every 100msec
    WaitTime = 0;
    while (WaitTime < WaitMiliSec)
    {
        System.Threading.Thread.Sleep(25);
        Application.DoEvents();
    }
}
No, you won't lose any data. The serial port has its own buffer which does not depend on your application at all; the OS and the hardware will handle this for you.
I would suggest refactoring the data send/receive into its own task/thread. That way you don't need the Application.DoEvents().
If you post some more of your send/receive code I might help you with this.
PS: it seems to me that your code will not work anyway (WaitTime is always zero), but I guess it's just a snippet, right?
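A minimal sketch of the suggested refactoring, assuming a System.IO.Ports.SerialPort field; the method and variable names and the 2-second timeout are assumptions, not from the question:
using System;
using System.IO.Ports;
using System.Threading;

// A dedicated reader thread blocks on the serial port instead of the UI
// thread sleeping and pumping messages with Application.DoEvents().
private void StartReader(SerialPort serialPort)
{
    serialPort.ReadTimeout = 2000; // give the device time to answer

    Thread readerThread = new Thread(() =>
    {
        while (serialPort.IsOpen)
        {
            try
            {
                string response = serialPort.ReadLine(); // or Read()/ReadExisting()
                // hand the response off to the UI thread or a queue here
            }
            catch (TimeoutException)
            {
                // the device did not answer in time; retry or send the next request
            }
        }
    });
    readerThread.IsBackground = true;
    readerThread.Start();
}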

C# .Net Serial DataReceived Event response too slow for high-rate data

I have set up a SerialDataReceivedEventHandler, with a forms based program in VS2008 express. My serial port is set up as follows:
115200, 8N1
Dtr and Rts enabled
ReceivedBytesThreshold = 1
I have a device I am interfacing with over a Bluetooth USB-to-serial adapter. HyperTerminal receives the data just fine at any data rate. The data is sent regularly in 22-byte-long packets. The device has an adjustable rate at which data is sent. At low data rates, 10-20Hz, the code below works great, no problems. However, when I increase the data rate past 25Hz, I start to receive multiple packets in one call. What I mean by this is that there should be an event trigger for every incoming packet. With higher output rates, I have checked the buffer size (BytesToRead) immediately when the event is called, and there are multiple packets in the buffer by then. I think the event fires slowly, and by the time it reaches the code, more packets have hit the buffer. One test I do is to see how many times the event is triggered per second. At 10Hz, I get 10 event triggers, awesome. At 100Hz, I get something like 40 event triggers, not good. My goal for the data rate is: 100Hz is acceptable, 200Hz preferred, and 300Hz optimum. This should work because even at 300Hz, that is only 52800bps, less than half of the set 115200 baud rate. Is there anything I am overlooking?
public Form1()
{
    InitializeComponent();
    serialPort1.DataReceived += new SerialDataReceivedEventHandler(serialPort1_DataReceived);
}

private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    this.Invoke(new EventHandler(Display_Results));
}

private void Display_Results(object s, EventArgs e)
{
    serialPort1.Read(IMU, 0, serialPort1.BytesToRead);
}
Did you try adjusting the latency timer on the USB serial converter? I had the same problem with an FTDI USB-to-serial converter. I used an oscilloscope to watch the IN and OUT data coming from the device, and I could see that the computer was always slow to respond. By default, the latency timer on the device is set to 16 ms. I changed it to 2 ms and it made a big difference. Go to your USB Serial Converter in Device Manager and, in the advanced settings, change the latency timer to 2 ms. It should work. Try it.
Why do you Invoke() the call to Display_Results?
This pushes it through the message loop, which adds an unnecessary delay.
It would be better if DataReceived() pushed data onto a (thread-safe) queue for decoupled processing.
I also think you could run into problems with split packets.
You could try setting ReceivedBytesThreshold = 22, which will result in the event being fired when there are at least 22 bytes to read. Note that it will be at least 22. There may be more.
I don't think I would personally do this though, because what happens if your packet size changes in the future, for example to 12 bytes? You would end up with 12 bytes in the buffer but not firing the event at all.
Far better to leave it set to 1, which will fire the event when at least 1 byte is available. Then push all received bytes into a list or a queue as Henk has already posted.
Note that the DataReceivedEvent has no knowledge of what you consider a packet of data to be of course. It just fires when there are bytes available. It is up to the developer to assemble these bytes into a meaningful message or packet.
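A minimal sketch of the queue-and-assemble approach suggested in the two answers above; the packet-size constant, field names, and ProcessPacket method are hypothetical, and a real handler should also guard against port errors:
using System.Collections.Generic;
using System.IO.Ports;

private const int PacketSize = 22;                 // size of one device packet
private readonly Queue<byte> _rxBytes = new Queue<byte>();
private readonly object _rxLock = new object();

private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    // Drain whatever is currently in the driver buffer; do NOT assume
    // it is exactly one packet.
    int count = serialPort1.BytesToRead;
    byte[] chunk = new byte[count];
    serialPort1.Read(chunk, 0, count);

    lock (_rxLock)
    {
        foreach (byte b in chunk)
            _rxBytes.Enqueue(b);

        // Hand off complete packets; heavy processing and UI updates
        // should happen outside this event handler.
        while (_rxBytes.Count >= PacketSize)
        {
            byte[] packet = new byte[PacketSize];
            for (int i = 0; i < PacketSize; i++)
                packet[i] = _rxBytes.Dequeue();
            ProcessPacket(packet); // hypothetical method; marshal to the UI only when needed
        }
    }
}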
The problem lies in the received data handler.
I ran a separate thread with a while(true) loop and serial.ReadLine(), all works perfectly.
using System.Threading;
Thread readThread = new Thread(Read);
readThread.Start();
Hope someone else doesn't need to spend 3 hours fixing this.

MSMQ Receive() method timeout

My original question from a while ago is MSMQ Slow Queue Reading, however I have advanced from that and now think I know the problem a bit more clearer.
My code (well actually part of an open source library I am using) looks like this:
queue.Receive(TimeSpan.FromSeconds(10), MessageQueueTransactionType.Automatic);
This uses the System.Messaging.MessageQueue.Receive function, and queue is a MessageQueue. The problem is as follows.
The above line of code will be called with the specified timeout (10 seconds). The Receive(...) function is a blocking function, and is supposed to block until a message arrives in the queue at which time it will return. If no message is received before the timeout is hit, it will return at the timeout. If a message is in the queue when the function is called, it will return that message immediately.
However, what is happening is the Receive(...) function is being called, seeing that there is no message in the queue, and hence waiting for a new message to come in. When a new message comes in (before the timeout), it isn't detecting this new message and continues waiting. The timeout is eventually hit, at which point the code continues and calls Receive(...) again, where it picks up the message and processes it.
Now, this problem only occurs after a number of days/weeks. I can make it work normally again by deleting & recreating the queue. It happens on different computers, and different queues. So it seems like something is building up, until some point when it breaks the triggering/notification ability that the Receive(...) function uses.
I've checked a lot of different things, and everything seems normal & isn't different from a queue that is working normally. There is plenty of disk space (13gig free) and RAM (about 350MB free out of 1GB from what I can tell). I have checked registry entries which all appear the same as other queues, and the performance monitor doesn't show anything out of the normal. I have also run the TMQ tool and can't see anything noticably wrong from that.
I am using Windows XP on all the machines and they all have service pack 3 installed. I am not sending a large amount of messages to the queues, at most it would be 1 every 2 seconds but generally a lot less frequent than that. The messages are only small too and nowhere near the 4MB limit.
The only thing I have just noticed is that the p0000001.mq and r0000067.mq files in C:\WINDOWS\system32\msmq\storage are both 4,096KB; however, they are that size on other computers too, which are not currently experiencing the problem. The problem does not happen to every queue on a computer at once, as I can recreate one problem queue on a computer and the other queues still experience the problem.
I am not very experienced with MSMQ so if you post possible things to check can you please explain how to check them or where I can find more details on what you are talking about.
Currently the situation is:
ComputerA - 4 queues normal
ComputerB - 2 queues experiencing problem, 1 queue normal
ComputerC - 2 queues experiencing problem
ComputerD - 1 queue normal
ComputerE - 2 queues normal
So I have a large number of computers/queues to compare and test against.
Any particular reason you aren't using an event handler to listen to the queues? The System.Messaging library allows you to attach a handler to a queue instead of, if I understand what you are doing correctly, looping Receive every 10 seconds. Try something like this:
class MSMQListener
{
    public void StartListening(string queuePath)
    {
        MessageQueue msQueue = new MessageQueue(queuePath);
        msQueue.ReceiveCompleted += QueueMessageReceived;
        msQueue.BeginReceive();
    }

    private void QueueMessageReceived(object source, ReceiveCompletedEventArgs args)
    {
        MessageQueue msQueue = (MessageQueue)source;

        // once a message is received, stop receiving
        Message msMessage = null;
        msMessage = msQueue.EndReceive(args.AsyncResult);

        // do something with the message

        // begin receiving again
        msQueue.BeginReceive();
    }
}
We are also using NServiceBus and had a similar problem inside our network.
Basically, MSMQ uses UDP with two-phase commits. After a message is received, it has to be acknowledged; until it is acknowledged, it cannot be received on the client side, as the receive transaction hasn't been finalized.
This was caused by different things at different times for us:
once, it was due to the Distributed Transaction Coordinator being unable to communicate between machines because of a firewall misconfiguration
another time, we were using cloned virtual machines without sysprep, which made the internal MSMQ ids non-unique and caused a message to be received on one machine and acknowledged on another. Eventually, MSMQ figures things out, but it takes quite a while.
Try this overloaded function:
public Message Receive(TimeSpan timeout, Cursor cursor)
To get a cursor for a MessageQueue, call the CreateCursor method for that queue.
A Cursor is used with such methods as Peek(TimeSpan, Cursor, PeekAction) and Receive(TimeSpan, Cursor) when you need to read messages that are not at the front of the queue. This includes reading messages synchronously or asynchronously. Cursors do not need to be used to read only the first message in a queue.
When reading messages within a transaction, Message Queuing does not roll back cursor movement if the transaction is aborted. For example, suppose there is a queue with two messages, A1 and A2. If you remove message A1 while in a transaction, Message Queuing moves the cursor to message A2. However, if the transaction is aborted for any reason, message A1 is inserted back into the queue but the cursor remains pointing at message A2.
To close the cursor, call Close.
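A minimal sketch of receiving with a cursor; the queue path is hypothetical, and the timeout handling is the usual MessageQueueException/IOTimeout pattern:
using System;
using System.Messaging;

MessageQueue queue = new MessageQueue(@".\private$\myQueue"); // hypothetical path
using (Cursor cursor = queue.CreateCursor())
{
    try
    {
        Message message = queue.Receive(TimeSpan.FromSeconds(10), cursor);
        // process the message here
    }
    catch (MessageQueueException ex)
    {
        if (ex.MessageQueueErrorCode != MessageQueueErrorCode.IOTimeout)
            throw;
        // no message arrived within the timeout
    }
}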
If you want to use something completely synchronous and without events, you can try this method:
public object Receive(string path, int millisecondsTimeout)
{
    var mq = new System.Messaging.MessageQueue(path);
    var asyncResult = mq.BeginReceive();
    var handles = new System.Threading.WaitHandle[] { asyncResult.AsyncWaitHandle };
    var index = System.Threading.WaitHandle.WaitAny(handles, millisecondsTimeout);

    if (index == System.Threading.WaitHandle.WaitTimeout) // 258 = timed out
    {
        mq.Close();
        return null;
    }

    var result = mq.EndReceive(asyncResult);
    return result;
}
