System.Threading.Timer causing other Timers to fall behind - c#

My first post here, but this site has answered many questions that I have had in the past. Hopefully I can give enough detail to explain the issue I am facing, as I don't fully understand how .NET handles all the threads I create!
OK, so basically I have a timer callback set to run every 1000ms which reads a frame counter from a video encoder and calculates the FPS. Accuracy is sufficient with a System.Threading.Timer for now, though I realise it isn't exact (often over 1000ms between events). I also have another Threading.Timer which takes a reading from a network-to-serial device. The issue is that if the network device becomes unavailable and the socket times out on that timer, the FPS timers go completely out of sync! They were previously executing every 1015ms (measured), but when the other Threading.Timer tries to make a socket connection and fails, the FPS counter timers go totally off (up to 7000ms!!). I'm not quite sure why this should be, and I really need the FPS counter to run once a second pretty much no matter what.
Bit of code ->
FPS Counter
private void getFPS(Object stateInfo) // Run once per second
{
    int frames = AxisMediaControl.getFrames; // Axis encoder media control
    int fps = frames - prevValue;
    prevValue = frames;
    setFPSBar(fps, fps_color); // Delegate to update progress bar for FPS
}
Battery Level Timer
while (isRunning)
{
    if (!comm.Connected) // comm is a standard socket client
        comm.Connect(this.ip_address, this.port); // Timeout here causes other timer threads to go out of sync
    if (comm.Connected)
    {
        decimal reading = comm.getBatt_Level();
        // Calculate readings and update GUI
        Console.Out.WriteLine("Reading = " + (int)prog);
        break; // Debug
    }
}
This is the code used to connect to the socket currently ->
public Socket mSocket { get; set; }

public bool Connect(IPAddress ip_address, UInt16 port)
{
    try
    {
        mSocket.Connect(ip_address, port);
    }
    catch (Exception ex)
    {
        // NOTE: exception is silently swallowed here
    }
    return mSocket.Connected;
}
Hopefully not too ambiguous!

While I don't know why your FPS timer is not called for 7s, I can suggest a workaround: Measure the TimeSpan since the last time the FPS value was updated by remembering the Environment.TickCount value. Then, calculate the FPS value as (delta_frames / delta_t).
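A minimal sketch of that delta-time idea, reusing the getFPS callback from the question (AxisMediaControl.getFrames, setFPSBar and fps_color come from there; the two extra fields are illustrative, and Stopwatch would work just as well as Environment.TickCount):

// Delta-time version of the question's FPS callback; prevFrames/prevTick are new fields.
private int prevFrames;
private int prevTick = Environment.TickCount;

private void getFPS(Object stateInfo)
{
    int frames = AxisMediaControl.getFrames;
    int tick = Environment.TickCount;

    double deltaSeconds = (tick - prevTick) / 1000.0; // actual elapsed time, not the nominal 1000ms
    double fps = deltaSeconds > 0 ? (frames - prevFrames) / deltaSeconds : 0;

    prevFrames = frames;
    prevTick = tick;

    setFPSBar((int)Math.Round(fps), fps_color);
}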

Thanks for the comments; I fixed it by doing the following.
I used a System.Timers.Timer instead and set AutoReset to false. Each time one of the timers completes, I start it again, which means there is only ever one timer callback in flight for each battery device. The problem with the initial solution was that the network timeout caused the callbacks to stay alive for much longer than the timer interval, so to keep to the interval a new thread pool thread was spawned while the previous ones were still blocked.
At runtime this meant there were about 5-7 threads for each battery timer (6 of them timing out and 1 about to begin). Changing to the new timer means there is only one thread now, as it should be.
I also added code to calculate the FPS based on the time actually elapsed (using a Stopwatch for higher accuracy - thanks usr). Thanks for the help. I will also have to make sure not to leave catch blocks empty.
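For reference, a minimal sketch of that single-shot restart pattern (comm, getBatt_Level, ip_address, port and isRunning are the members from the question; everything else is illustrative):

// Sketch of the AutoReset = false pattern: the timer fires once, the handler does its
// (possibly slow) work, and only then is the timer restarted, so callbacks can never
// pile up while a socket connect is timing out.
var batteryTimer = new System.Timers.Timer(1000) { AutoReset = false };
batteryTimer.Elapsed += (sender, e) =>
{
    try
    {
        if (!comm.Connected)
            comm.Connect(this.ip_address, this.port); // may block until the timeout
        if (comm.Connected)
        {
            decimal reading = comm.getBatt_Level();
            // Calculate readings and update GUI here
        }
    }
    finally
    {
        if (isRunning)
            batteryTimer.Start(); // schedule the next tick only after this one has finished
    }
};
batteryTimer.Start();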

Related

Task.Delay in .net fires 125ms early

So I'm seeing some really strange behavior with a C# Task.Delay that is driving me insane.
Context: I'm using C# .net to communicate with one of our devices via R4852. The device needs roughly 200ms to finish each command so I introduced a 250ms delay inside my communication class.
Bug / bad behavior: The delay inside my communication class sometimes waits for 250ms and sometimes only waits for 125ms. This is reproducible and the same behavior occurs when I'm increasing my delay. E.g. if I set my delay to 1000ms every second request will only wait for 875ms, so again there are 125ms missing.
This behavior only occurs if there is no debugger attached, and only on some machines. The machine where this software will be used in our production department has this issue; the machine I'm working on right now doesn't. Both are running Windows 10.
How come that there are 125ms missing from time to time?
I have already learnt that the Task.Delay method uses a timer with a precision of about 15ms. This doesn't explain the missing 125ms, as it should at most fire a few milliseconds too late rather than 125ms too early.
The following method is the one I use to queue commands to my device. A semaphore (_requestSemaphore) ensures that only one command can be executed at a time, so there can only ever be one request being processed.
public async Task<bool> Request(WriteRequest request)
{
    await _requestSemaphore.WaitAsync(); // block incoming calls
    await Task.Delay(Delay); // delay
    Write(_connectionIdDictionary[request.Connection], request.Request); // write
    if (request is WriteReadRequest)
    {
        _currentRequest = request as WriteReadRequest;
        var readSuccess = await _readSemaphore.WaitAsync(Timeout); // wait until read of line has finished
        _currentRequest = null; // set _currentRequest to null
        _requestSemaphore.Release(); // release next incoming call
        if (!readSuccess)
        {
            return false;
        }
        else
        {
            return true;
        }
    }
    else
    {
        if (request is WriteWithDelayRequest)
        {
            await Task.Delay((request as WriteWithDelayRequest).Delay);
        }
        _requestSemaphore.Release(); // release next incoming call
        return true;
    }
}
The following code is part of the method that sends the requests to the method above. I removed some lines to keep it short; the basic stuff (requesting and waiting) is still there:
// this command is the first command and will always have a proper delay of 1000ms
var request = new Communication.Requests.WriteRequest(item.Connection, item.Command);
await _translator.Request(request);

// this request is the second request that is missing 125ms
var queryRequest = new Communication.Requests.WriteReadRequest(item.Connection, item.Query); // query that is being sent to check if the value has been sent properly
if (await _translator.Request(queryRequest)) // send the query to the device and wait for response
{
    if (item.IsQueryValid(queryRequest.Response)) // check result
    {
        item.Success = true;
    }
}
The first request that I'm sending to this method is a WriteRequest, the second one a WriteReadRequest.
I discovered this behavior when looking at the serial port communication using a software named Device Monitoring Studio to monitor the serial communication.
Here is a screenshot of the actual serial communication. In this case I was using a delay of 1000ms. You can see that the sens0002 command had a delay of exactly 1 second before it was executed. The next command / query sens? only has an 875ms delay. This screenshot was taken while the debugger was not attached.
Here is another screenshot. The delay was set to 1000ms again but this time the debugger was attached. As you can see the first and second command now both have a delay of roughly 1000ms.
And in the two following screenshots you can see the same behavior with a delay of 250ms (bugged down to 125ms). First screenshot without debugger attached, second one with debugger attached. In the second screenshot you can also see that there is quite a drift of 35ms, but still nowhere close to the 125ms that were missing before.
So what the hell am I looking at here? The quick and dirty solution would be to just increase the delay to 1000ms so that this won't be an issue anymore but I'd rather understand why this issue occurs and how to fix it properly.
Cheers!
As far as I can see, your times are printed as deltas relative to the previous entry.
In the 125/875ms case you have 8 intermediate entries of roughly 15ms each (sum roughly 120ms).
In the 250/1000ms case you have 8 intermediate entries of roughly 5ms each (sum roughly 40ms), and the numbers are actually more like 215/960ms.
So, if you add those intermediate delays, the resulting complete delay is roughly the same as far as I can tell.
Answering for everyone who just wants a yes/no on the question title: The First Rule of Programming: It's Always Your Fault.
It's safe to assume that Task.Delay waits at least the specified amount of time (it might be more due to clock resolution). So if it seems to cover a smaller timespan, then the method used to measure the actual delay is faulty somehow.
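A quick way to convince yourself of that is to measure the delay in-process rather than on the wire; a minimal sketch (timings will vary with the ~15ms system timer resolution):

using System;
using System.Diagnostics;
using System.Threading.Tasks;

class DelayCheck
{
    // Measure Task.Delay directly with a Stopwatch: the elapsed time should never come out
    // below the requested delay, only slightly above it.
    static async Task Main()
    {
        for (int i = 0; i < 5; i++)
        {
            var sw = Stopwatch.StartNew();
            await Task.Delay(250);
            sw.Stop();
            Console.WriteLine($"Requested 250ms, waited {sw.ElapsedMilliseconds}ms");
        }
    }
}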

Chip-8 Emulator: slow down clock speed

I'm planning to write a NES emulator. But first, to understand how emulation works, I'll write a Chip-8 emulator.
The emulator is nearly finished. I have some bugs in games, but these will be fixed soon.
My problem number 1 is to synchronize the emulator with the clock speed of the Chip-8.
On the internet I've often read that the general clock speed should be ~540Hz, and that the chip's timers should tick at a frequency of 60Hz.
To synchronize my emulator with the Chip-8 I've written the following logic:
private void GameTick()
{
    Stopwatch watch = new Stopwatch();
    var instructionCount = 0;
    _gameIsRunning = true;
    while (_gameIsRunning)
    {
        watch.Restart();
        EmulateCycle();

        // Updates the internal timers at a 60Hz frequency:
        // 540Hz (game tick) divided by 9 equals 60Hz (timer tick)
        instructionCount++;
        if (instructionCount == 9)
        {
            UpdateSoundAndDelay();
            instructionCount = 0;
        }

        if (_readyToDraw)
        {
            DrawGraphics();
            _readyToDraw = false;
        }
        SetKeys();

        // Pause the game to get a virtual clock speed of ca. 540Hz
        var elapsedMicroseconds = watch.ElapsedTicks / (Stopwatch.Frequency / (1000L * 1000L));
        while (elapsedMicroseconds < 1852)
        {
            elapsedMicroseconds = watch.ElapsedTicks / (Stopwatch.Frequency / (1000L * 1000L));
        }
    }
}
For more detailed information look at my repo: https://github.com/Marcel-Hoffmann/Chip-8-Emulator
As you can see, for each CPU cycle I wait 1852 microseconds, which gives roughly 540 cycles per second, i.e. 540Hz.
But I'm not very happy with this logic.
Does anyone have a better idea how to synchronize the clock speed?
This is the typical approach, and it has many drawbacks - most notably, unnecessary CPU usage and potential scheduling issues (your application will be seen as a 100% CPU beast, so other applications might get their thread quanta before you under load).
A better approach would use a sleep instead - however, by default, the system timer has nowhere near the frequency to accommodate a wait that's less than 2ms. So if you want to use a sleep, you'll need to change the system timer. This is a bit tricky on older Windows (it's a system-wide setting and has noticeable impact on other applications and general CPU usage), but even in that case, it's better than a "busy loop" - as long as you restore the system settings afterwards. On Windows 8 (and to some extent, 7 and Vista), the timer is asynchronous and no longer requires a busy loop, so it's a lot easier to have higher timer resolution.
The system timer APIs are not exposed by .NET, so you'll need to use P/Invokes (timeBeginPeriod and timeEndPeriod for the old-style API). If this isn't available, you can always fall back to your busy loop :)
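For example, a rough sketch of that old-style API combined with a sleep-based pacing loop (the 1852µs per cycle comes from the question; error handling is omitted, and remember the raised resolution is system-wide until restored):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;

static class HighResSleepDemo
{
    // Old-style multimedia timer resolution API from winmm.dll.
    [DllImport("winmm.dll")]
    static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll")]
    static extern uint timeEndPeriod(uint uMilliseconds);

    static void Main()
    {
        timeBeginPeriod(1); // request ~1ms system timer resolution (system-wide!)
        try
        {
            var sw = Stopwatch.StartNew();
            long nextDueMicroseconds = 0;
            for (int cycle = 0; cycle < 540; cycle++)
            {
                // EmulateCycle() would go here.
                nextDueMicroseconds += 1852; // ~540 cycles per second
                long elapsedMicroseconds = sw.ElapsedTicks * 1000000L / Stopwatch.Frequency;
                long remainingMs = (nextDueMicroseconds - elapsedMicroseconds) / 1000;
                if (remainingMs > 0)
                    Thread.Sleep((int)remainingMs); // now accurate to ~1-2ms instead of ~15ms
            }
        }
        finally
        {
            timeEndPeriod(1); // always restore the previous resolution
        }
    }
}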

Does C# Stopwatch pause when my computer goes to sleep? [closed]

I'm measuring the runtime of my program (written in C#) with Stopwatch. My computer went to sleep, and I don't know whether the time it shows is correct or not. So does Stopwatch measure the sleep time too, or was it paused and then continued?
According to the source code for Stopwatch, the Start method looks like this:
public void Start() {
    // Calling start on a running Stopwatch is a no-op.
    if (!isRunning) {
        startTimeStamp = GetTimestamp();
        isRunning = true;
    }
}
The method Stop looks like this:
public void Stop() {
    // Calling stop on a stopped Stopwatch is a no-op.
    if (isRunning) {
        long endTimeStamp = GetTimestamp();
        long elapsedThisPeriod = endTimeStamp - startTimeStamp;
        elapsed += elapsedThisPeriod;
        isRunning = false;
        if (elapsed < 0) {
            // When measuring small time periods the StopWatch.Elapsed*
            // properties can return negative values. This is due to
            // bugs in the basic input/output system (BIOS) or the hardware
            // abstraction layer (HAL) on machines with variable-speed CPUs
            // (e.g. Intel SpeedStep).
            elapsed = 0;
        }
    }
}
So the answer to your question is: it measures the duration using two timestamps, which leads to the conclusion that it doesn't matter whether your computer goes to sleep or not - the elapsed time is simply the difference between the two timestamps.
Update (thanks Mike and Joe):
However, if your computer is sleeping, it cannot run your program - so the measured duration would be the sum of the duration the program has been running and the duration where the computer has been sleeping.
TotalDuration = CalculationDuration + SleepDuration.
The stopwatch does not pause when the computer enters sleep.
It uses the Windows API QueryPerformanceCounter() function, which does not reset the count when the computer goes to sleep:
"QueryPerformanceCounter reads the performance counter and returns the
total number of ticks that have occurred since the Windows operating
system was started, including the time when the machine was in a sleep
state such as standby, hibernate, or connected standby."
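If you actually want to exclude the sleep period from a measurement, one option (a sketch, assuming a desktop app where Microsoft.Win32.SystemEvents is available and a message loop is running) is to pause the Stopwatch around suspend/resume notifications:

using System;
using System.Diagnostics;
using Microsoft.Win32;

class SleepAwareTimer
{
    private readonly Stopwatch _stopwatch = Stopwatch.StartNew();

    public SleepAwareTimer()
    {
        // Pause the Stopwatch while the machine is suspended so only "awake" time is counted.
        SystemEvents.PowerModeChanged += (sender, e) =>
        {
            if (e.Mode == PowerModes.Suspend)
                _stopwatch.Stop();
            else if (e.Mode == PowerModes.Resume)
                _stopwatch.Start();
        };
    }

    public TimeSpan AwakeElapsed => _stopwatch.Elapsed;
}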

How do I fire an event every 20ms *on average*? Inbuilt timer "creeps"

I'm writing an RTP server to publish PCMA wave files. It needs to pump data every 20ms (on average - it can be a bit either side of that for any 1 pump, but must average out at 20ms).
My current implementation uses a Timer, but the event fires just over every 20ms, so it gradually drifts out of sync.
Is there a better way to do this? The only way I can currently think of is to dynamically adjust the timer interval as it starts to creep, in order to bring it back in line.
Sample Code
void Main()
{
    System.Timers.Timer timer = new System.Timers.Timer();

    // Use a stopwatch to measure the "wall-clock" elapsed time.
    Stopwatch sw = new Stopwatch();
    sw.Start();

    timer.Elapsed += (sender, args) =>
    {
        Console.WriteLine(sw.ElapsedMilliseconds);
        // Simulate doing some work here -
        // in real life this would be pumping data via UDP.
        Thread.Sleep(300);
    };
    timer.AutoReset = true;

    // I'm using an interval of 1 second here as it better
    // illustrates the problem
    timer.Interval = 1000;
    timer.Start();
}
Output:
1002
2001
3002
4003
5003
6005
7006
8007
9007
10017
11018
12019
13019
14020 <-- By this point we have creeped over 20 ms in just 14 iterations :(
First of all: you will never get it to be exact, because your program will never be in full control of what the CPUs are doing as long as you are running on standard Windows, which is not a real-time OS. Just think of an antivirus kicking in, the garbage collector freezing your thread, a game playing on the side, ...
That said you might be able to compensate a bit.
When the handler kicks in, pause the timer, record the current time, do the work, then update the timer's interval so the next tick lands where it should, based on when this handler started and how long the work took.
This way you can control the creeping better. An edge case is when the work takes longer than the interval, and you also have to decide whether the interval should include the time to act or be the time between two acts.
In my experience you cannot rely on any timer to get an interval that small (20 ms) accurately but compensating for creep can help quite a bit.
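A minimal sketch of that compensation idea - a single-shot timer whose next interval is recomputed against a wall-clock Stopwatch so the error can't accumulate (the 20ms period comes from the question; class and method names are illustrative):

using System;
using System.Diagnostics;

class PacedPump
{
    private readonly System.Timers.Timer _timer = new System.Timers.Timer { AutoReset = false };
    private readonly Stopwatch _clock = Stopwatch.StartNew();
    private const double PeriodMs = 20.0;
    private long _tickCount;

    public void Start()
    {
        _timer.Elapsed += (sender, e) =>
        {
            PumpData(); // in real life: push the next chunk of RTP data over UDP

            // Schedule the next tick against the ideal timeline rather than "now + 20ms",
            // so any lateness in this tick is absorbed instead of accumulating.
            _tickCount++;
            double nextDueMs = (_tickCount + 1) * PeriodMs;
            double waitMs = nextDueMs - _clock.Elapsed.TotalMilliseconds;
            _timer.Interval = Math.Max(1.0, waitMs); // Interval must be positive
            _timer.Start();
        };
        _timer.Interval = PeriodMs;
        _timer.Start();
    }

    private void PumpData()
    {
        Console.WriteLine($"{_clock.ElapsedMilliseconds}ms");
    }
}

Individual ticks can still be late, but because each interval is computed against the ideal schedule rather than the previous tick, the average stays at 20ms.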
You could use Stopwatch to measure time, but it doesn't have callbacks.
You can use the Windows multimedia timer. It involves some WinAPI, but all the details are provided in this article.
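A rough sketch of driving the multimedia timer via P/Invoke (the winmm.dll timeSetEvent/timeKillEvent API; a sketch only - real code should check return values, and the callback delegate must be kept alive, as shown):

using System;
using System.Runtime.InteropServices;

class MultimediaTimerDemo
{
    // Callback signature expected by timeSetEvent.
    private delegate void TimeProc(uint id, uint msg, UIntPtr user, UIntPtr dw1, UIntPtr dw2);

    [DllImport("winmm.dll")]
    private static extern uint timeSetEvent(uint delayMs, uint resolutionMs, TimeProc callback, UIntPtr user, uint eventType);

    [DllImport("winmm.dll")]
    private static extern uint timeKillEvent(uint timerId);

    private const uint TIME_PERIODIC = 1;

    // Keep a reference so the GC doesn't collect the delegate while native code holds it.
    private static readonly TimeProc Callback = (id, msg, user, dw1, dw2) =>
        Console.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff"));

    static void Main()
    {
        uint timerId = timeSetEvent(20, 1, Callback, UIntPtr.Zero, TIME_PERIODIC); // ~20ms ticks, ~1ms resolution
        Console.ReadLine();
        timeKillEvent(timerId);
    }
}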

How do I obtain the latency between server and client in C#?

I'm working on a C# server application for a game engine I'm writing in ActionScript 3. I'm using an authoritative server model to prevent cheating and ensure a fair game. So far, everything works well:
When the client begins moving, it tells the server and starts rendering locally; the server then tells everyone else that client X has begun moving, along with details so they can also begin rendering. When the client stops moving, it tells the server, which performs calculations based on the time the client began moving and the client render tick delay, and replies to everyone so they can update with the correct values.
The thing is, when I use the default 20ms tick delay on server calculations, when the client moves for a rather long distance, there's a noticeable tilt forward when it stops. If I increase the delay slightly to 22ms, on my local network everything runs very smoothly, but in other locations the tilt is still there. After experimenting a little, I noticed that the extra delay needed is pretty much tied to the latency between client and server. I even boiled it down to a formula that would work quite nicely: delay = 20 + (latency / 10).
So, how would I proceed to obtain the latency between a certain client and the server (I'm using asynchronous sockets)? The CPU effort can't be too high, so as not to slow the server down. Also, is this really the best way, or is there a more efficient/easier way to do this?
Sorry that this isn't directly answering your question, but generally speaking you shouldn't rely too heavily on measuring latency because it can be quite variable. Not only that, you don't know if the ping time you measure is even symmetrical, which is important. There's no point applying 10ms of latency correction if it turns out that the ping time of 20ms is actually 19ms from server to client and 1ms from client to server. And latency in application terms is not the same as in networking terms - you may be able to ping a certain machine and get a response in 20ms but if you're contacting a server on that machine that only processes network input 50 times a second then your responses will be delayed by an extra 0 to 20ms, and this will vary rather unpredictably.
That's not to say latency measurement doesn't have a place in smoothing predictions out, but it's not going to solve your problem, just clean it up a bit.
On the face of it, the problem here seems to be that you send information in the first message which is used to extrapolate from until the last message is received. If all else stays constant, then the movement vector given in the first message multiplied by the time between the messages will give the server the correct end position that the client was in at roughly now-(latency/2). But if the latency changes at all, the time between the messages will grow or shrink. The client may know he's moved 10 units, but the server simulated him moving 9 or 11 units before being told to snap him back to 10 units.
The general solution to this is to not assume that latency will stay constant but to send periodic position updates, which allow the server to verify and correct the client's position. With just 2 messages as you have now, all the error is found and corrected after the 2nd message. With more messages, the error is spread over many more sample points allowing for smoother and less visible correction.
It can never be perfect though: all it takes is a lag spike in the last millisecond of movement and the server's representation will overshoot. You can't get around that if you're predicting future movement based on past events, as there's no real alternative to choosing either correct-but-late or incorrect-but-timely since information takes time to travel. (Blame Einstein.)
One thing to keep in mind when using ICMP based pings is that networking equipment will often give ICMP traffic lower priority than normal packets, especially when the packets cross network boundaries such as WAN links. This can lead to pings being dropped or showing higher latency than traffic is actually experiencing and lends itself to being an indicator of problems rather than a measurement tool.
The increasing use of Quality of Service (QoS) in networks only exacerbates this. As a consequence, although ping remains a useful tool, it needs to be understood that it may not be a true reflection of the network latency experienced by non-ICMP real traffic.
There is a good post at the Itrinegy blog How do you measure Latency (RTT) in a network these days? about this.
You could use the already available Ping class. It should be preferred over writing your own, IMHO.
Have a "ping" command, where you send a message from the server to the client, then time how long it takes to get a response. Barring CPU overload scenarios, it should be pretty reliable. To get the one-way trip time, just divide the time by 2.
We can measure the round-trip time using the Ping class of the .NET Framework.
Instantiate a Ping and subscribe to the PingCompleted event:
Ping pingSender = new Ping();
pingSender.PingCompleted += PingCompletedCallback;
Add code to configure and action the ping.
Our PingCompleted event handler (PingCompletedEventHandler) has a PingCompletedEventArgs argument. The PingCompletedEventArgs.Reply gets us a PingReply object. PingReply.RoundtripTime returns the round trip time (the "number of milliseconds taken to send an Internet Control Message Protocol (ICMP) echo request and receive the corresponding ICMP echo reply message"):
public static void PingCompletedCallback(object sender, PingCompletedEventArgs e)
{
    ...
    Console.WriteLine($"Roundtrip Time: {e.Reply.RoundtripTime}");
    ...
}
Code-dump of a full working example, based on MSDN's example. I have modified it to write the RTT to the console:
public static void Main(string[] args)
{
    string who = "www.google.com";
    AutoResetEvent waiter = new AutoResetEvent(false);

    Ping pingSender = new Ping();

    // When the PingCompleted event is raised,
    // the PingCompletedCallback method is called.
    pingSender.PingCompleted += PingCompletedCallback;

    // Create a buffer of 32 bytes of data to be transmitted.
    string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
    byte[] buffer = Encoding.ASCII.GetBytes(data);

    // Wait 12 seconds for a reply.
    int timeout = 12000;

    // Set options for transmission:
    // The data can go through 64 gateways or routers
    // before it is destroyed, and the data packet
    // cannot be fragmented.
    PingOptions options = new PingOptions(64, true);

    Console.WriteLine("Time to live: {0}", options.Ttl);
    Console.WriteLine("Don't fragment: {0}", options.DontFragment);

    // Send the ping asynchronously.
    // Use the waiter as the user token.
    // When the callback completes, it can wake up this thread.
    pingSender.SendAsync(who, timeout, buffer, options, waiter);

    // Prevent this example application from ending.
    // A real application should do something useful
    // when possible.
    waiter.WaitOne();
    Console.WriteLine("Ping example completed.");
}
public static void PingCompletedCallback(object sender, PingCompletedEventArgs e)
{
    // If the operation was canceled, display a message to the user.
    if (e.Cancelled)
    {
        Console.WriteLine("Ping canceled.");

        // Let the main thread resume.
        // UserToken is the AutoResetEvent object that the main thread
        // is waiting for.
        ((AutoResetEvent)e.UserState).Set();
        return; // no reply to report in this case
    }

    // If an error occurred, display the exception to the user.
    if (e.Error != null)
    {
        Console.WriteLine("Ping failed:");
        Console.WriteLine(e.Error.ToString());

        // Let the main thread resume.
        ((AutoResetEvent)e.UserState).Set();
        return; // no reply to report in this case
    }

    Console.WriteLine($"Roundtrip Time: {e.Reply.RoundtripTime}");

    // Let the main thread resume.
    ((AutoResetEvent)e.UserState).Set();
}
You might want to perform several pings and then calculate an average, depending on your requirements of course.
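For instance, a small sketch using the synchronous Ping.Send overload to average a few round trips (the host name and attempt count are illustrative):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.NetworkInformation;

class AveragePing
{
    static void Main()
    {
        const string host = "www.google.com"; // illustrative target
        const int attempts = 5;

        using (var pingSender = new Ping())
        {
            var times = new List<long>();
            for (int i = 0; i < attempts; i++)
            {
                PingReply reply = pingSender.Send(host, 2000); // 2 second timeout per attempt
                if (reply.Status == IPStatus.Success)
                    times.Add(reply.RoundtripTime);
            }

            Console.WriteLine(times.Count > 0
                ? $"Average RTT over {times.Count} replies: {times.Average()}ms"
                : "No successful replies.");
        }
    }
}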
