I have this block of code:
var client = new TcpClient();
HttpRequestInfo.AddTimestamp("Connecting");
await Task.WhenAny(client.ConnectAsync(serverAddress, serverPort),
                   Task.Delay(TimeSpan.FromMilliseconds(300)));
HttpRequestInfo.AddTimestamp("Connected");
if (client.Connected) { ... }
Here, HttpRequestInfo.AddTimestamp simply logs named timestamps using the Stopwatch class.
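It boils down to something like this (simplified sketch; the real class may differ):

// Simplified sketch of the logging helper: it records the elapsed time of a
// shared Stopwatch under a given label.
public static class HttpRequestInfo
{
    private static readonly System.Diagnostics.Stopwatch _stopwatch = System.Diagnostics.Stopwatch.StartNew();

    public static void AddTimestamp(string label) =>
        Console.WriteLine($"\"{label}\":{_stopwatch.ElapsedMilliseconds}ms");
}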
In logs I sometimes see:
"Connecting":110ms - "Connected":747ms
"Connecting":35ms - "Connected":3120ms
"Connecting":38ms - "Connected":3053ms
I assumed this approach would let me limit the connection attempt to a 300 ms timeout. However, I see that this line of code sometimes (very rarely) runs longer than 300 ms.
What is the reason for this behavior?
The docs state:
This method depends on the system clock. This means that the time
delay will approximately equal the resolution of the system clock if
the delay argument is less than the resolution of the system clock,
which is approximately 15 milliseconds on Windows systems.
That can explain delays that overshoot 300 ms by roughly 15 milliseconds, because the delay has to adjust itself to the system clock resolution.
It does not explain your longer overruns, which are on a much larger scale.
My assumption is that ConnectAsync may, for some reason, block for a while before returning to the calling method. If that is the case, the time is lost between your first log entry and the moment you actually start Task.Delay, and the problem is not related to the delay at all.
You can try the following code and monitor the logs; maybe the lost time is hiding in the launch of ConnectAsync:
var client = new TcpClient();

HttpRequestInfo.AddTimestamp("Launching ConnectAsync");
var connectAsyncTask = client.ConnectAsync(serverAddress, serverPort);
HttpRequestInfo.AddTimestamp("ConnectAsync launched");

HttpRequestInfo.AddTimestamp("Launching Delay");
var delayTask = Task.Delay(TimeSpan.FromMilliseconds(300));
HttpRequestInfo.AddTimestamp("Delay launched");

var firstTask = await Task.WhenAny(connectAsyncTask, delayTask);
if (firstTask == connectAsyncTask)
{
    HttpRequestInfo.AddTimestamp("Connected");
}
else
{
    HttpRequestInfo.AddTimestamp("Timeout");
}
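If the logs show that the time is indeed lost before Task.Delay starts (for example because ConnectAsync does a synchronous DNS lookup of serverAddress before returning), one possible mitigation is to push that synchronous part onto the thread pool so both tasks start immediately. A sketch of that idea, not a guaranteed fix:

var client = new TcpClient();
HttpRequestInfo.AddTimestamp("Connecting");

// Wrap the call so any synchronous work inside ConnectAsync (e.g. DNS resolution)
// runs on the thread pool and does not delay the start of Task.Delay.
var connectTask = Task.Run(() => client.ConnectAsync(serverAddress, serverPort));
var delayTask = Task.Delay(TimeSpan.FromMilliseconds(300));

var firstTask = await Task.WhenAny(connectTask, delayTask);
HttpRequestInfo.AddTimestamp(firstTask == connectTask ? "Connected" : "Timeout");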
So basically I am running a program that sends up to 7,000 HTTP requests per second on average, 24/7, in order to detect the latest changes on a website as quickly as possible.
However, every 2.5 to 3 minutes on average, my program slows down for around 10-15 seconds and drops from ~7K rq/s to less than 1,000.
Here are logs from my program, where you can see the amount of requests it sends every second:
https://pastebin.com/029VLxZG
When scrolling down through the logs, you can see it goes slower every ~3 minutes. Example: https://i.imgur.com/US0wPzm.jpeg
At first I thought it was my server's ethernet connection going into a temporary "restricted" mode, and I even tried contacting my host about it. But then I ran 2 instances of my program simultaneously just to see what would happen, and I noticed that even though the issue (downtime) was happening on both, it wasn't always happening at the same time (it depended on when each instance was started, if you get what I mean), which meant the problem wasn't coming from the internet connection, but from my program itself.
I investigated a little bit more and found out that as soon as my program drops from ~7K rq/s to ~700, a lot of RAM is freed up on my server.
I have taken 2 screenshots of the consecutive seconds just before and right when the downtime occurs (including RAM metrics) for comparison, and you can view them here: https://imgur.com/a/sk2TYQZ (please note that I was using fewer threads here, which is why the average "normal" speed is ~2K rq/s instead of ~7K as mentioned before).
If you'd like to see more of it, here is the full record of the issue, in a video which lasts about 40 seconds: https://i.imgur.com/z27FlVP.mp4 - As you can see, after the RAM is freed up, its usage slowly goes up again, before the same process repeats every ~3 minutes.
For more context, here is the method I am using to send the HTTP requests (it is being called from a lot of threads concurrently, as my app is multi-threaded in order to be super fast):
public static async Task<bool> HasChangedAsync(string endpoint, HttpClient httpClient)
{
    const string baseAddress = "https://example.com/";
    string response = await httpClient.GetStringAsync(baseAddress + endpoint);
    return response.Contains("example");
}
One thing I did was to replace the whole method body with await Task.Delay(25) followed by return false, and that fixed the issue: RAM usage was barely increasing.
This led me to believe the issue lies with HttpClient / my HTTP requests, and even though I tried replacing GetStringAsync with GetAsync using both an HttpRequestMessage and an HttpResponseMessage (and disposing them with using), the behavior ended up being exactly the same.
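Roughly, that variant looked like this (reconstructed, so the exact calls may have differed slightly):

public static async Task<bool> HasChangedAsync(string endpoint, HttpClient httpClient)
{
    const string baseAddress = "https://example.com/";

    // Same check as before, but with explicit request/response objects,
    // both disposed via using.
    using (var request = new HttpRequestMessage(HttpMethod.Get, baseAddress + endpoint))
    using (var response = await httpClient.SendAsync(request))
    {
        string body = await response.Content.ReadAsStringAsync();
        return body.Contains("example");
    }
}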
So here I am, desperate for a fix, and without enough knowledge about memory, garbage collector etc (if that's even needed here) to be able to fix this myself.
Please, Stack Overflow, do you have any idea?
Thanks a lot.
Your best bet would be to stream the response and then use chunks of it to find what you are looking for. An example implementation could look something like this:
using var response = await Client.GetAsync(BaseUrl, HttpCompletionOption.ResponseHeadersRead);
await using var stream = await response.Content.ReadAsStreamAsync();
using var reader = new StreamReader(stream);

string line = null;
while ((line = await reader.ReadLineAsync()) != null)
{
    if (line.Contains("example"))
    {
        // do whatever
    }
}
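The HttpCompletionOption.ResponseHeadersRead option makes GetAsync complete as soon as the response headers have been read, so the body is streamed from the network on demand instead of being buffered into one large string first; that should take a lot of pressure off the garbage collector compared to GetStringAsync.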
To keep track of performance in our software, we measure the duration of the calls we are interested in.
For example:
using (var performanceTrack = new PerformanceTracker("pt-1"))
{
    // do some stuff
    CallAnotherMethod();

    using (var anotherPerformanceTrack = new PerformanceTracker("pt-1a"))
    {
        // do stuff
        // .. do something
    }

    using (var anotherPerformanceTrackb = new PerformanceTracker("pt-1b"))
    {
        // do stuff
        // .. do something
    }

    // do more stuff
}
This will result in something like:
pt-1 [----------------------------] 28ms
[--] 2ms from another method
pt-1a [-----------] 11ms
pt-1b [-------------] 13ms
In the constructor of PerformanceTracker I start a Stopwatch (as far as I know, that's the most reliable way to measure a duration). In the Dispose method I stop the Stopwatch and save the results to Application Insights.
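Conceptually it boils down to something like this (simplified; the real class reports to Application Insights instead of the console):

// Simplified sketch of the tracker: it starts a Stopwatch in the constructor and,
// on Dispose, stops it and reports the elapsed time.
public sealed class PerformanceTracker : IDisposable
{
    private readonly string _name;
    private readonly System.Diagnostics.Stopwatch _stopwatch = System.Diagnostics.Stopwatch.StartNew();

    public PerformanceTracker(string name) => _name = name;

    public void Dispose()
    {
        _stopwatch.Stop();
        Console.WriteLine($"{_name}: {_stopwatch.ElapsedMilliseconds}ms");
    }
}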
I have noticed a lot of fluctuation in the results. To reduce it, I have already done the following:
Run in a release build, outside of Visual Studio.
Do a warm-up call first, which is not included in the statistics.
Call the garbage collector before every call (75 calls in total).
After this the fluctuation is smaller, but the results are still not very consistent. For example, I ran my test set twice; here are the results, in milliseconds:
Avg: 782.946666666667 vs 981.68
Min: 489 vs 513
Max: 2600 vs 4875
stdev: 305.854933523003 vs 652.343471128764
sampleSize: 75 vs 75
Why is the performance measurement with the stopwatch still giving a lot of variation in the results? I found on SO (https://stackoverflow.com/a/16157458/1408786) that I should maybe add the following to my code:
//prevent the JIT Compiler from optimizing Fkt calls away
long seed = Environment.TickCount;
//use the second Core/Processor for the test
Process.GetCurrentProcess().ProcessorAffinity = new IntPtr(2);
//prevent "Normal" Processes from interrupting Threads
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
//prevent "Normal" Threads from interrupting this thread
Thread.CurrentThread.Priority = ThreadPriority.Highest;
But the problem is, we have a lot of async code. How can I get reliable performance tracking in this code? My aim is to discover performance degradation, for example when a method has become 10 ms slower after a check-in than it was before...
So I'm seeing really strange behavior with a C# Task.Delay that is kind of driving me insane.
Context: I'm using C# / .NET to communicate with one of our devices via R4852. The device needs roughly 200 ms to finish each command, so I introduced a 250 ms delay inside my communication class.
Bug / bad behavior: The delay inside my communication class sometimes waits for 250 ms and sometimes only waits for 125 ms. This is reproducible, and the same behavior occurs when I increase the delay: e.g. if I set it to 1000 ms, every second request only waits for 875 ms, so again 125 ms are missing.
This behavior only occurs when no debugger is attached, and only on some machines. The machine where this software will be used in our production department has this issue; the machine I'm working on right now doesn't. Both are running Windows 10.
How come that there are 125ms missing from time to time?
I have already learnt that the Task.Delay method uses a timer with a precision of about 15 ms. This doesn't explain the missing 125 ms, as that would at most make it fire a few milliseconds too late, not 125 ms too early.
The following method is the one I use to queue commands to my device. A semaphore (_requestSemaphore) ensures that only one command can be executed at a time, so there can only ever be one request being processed.
public async Task<bool> Request(WriteRequest request)
{
    await _requestSemaphore.WaitAsync(); // block incoming calls
    await Task.Delay(Delay); // delay
    Write(_connectionIdDictionary[request.Connection], request.Request); // write

    if (request is WriteReadRequest)
    {
        _currentRequest = request as WriteReadRequest;
        var readSuccess = await _readSemaphore.WaitAsync(Timeout); // wait until read of line has finished
        _currentRequest = null; // reset _currentRequest
        _requestSemaphore.Release(); // release next incoming call
        return readSuccess;
    }
    else
    {
        if (request is WriteWithDelayRequest)
        {
            await Task.Delay((request as WriteWithDelayRequest).Delay);
        }
        _requestSemaphore.Release(); // release next incoming call
        return true;
    }
}
The following code is part of the method that sends the requests to the method above. I removed some lines to keep it short; the basic stuff (requesting and waiting) is still there.
// this command is the first command and will always have a proper delay of 1000ms
var request = new Communication.Requests.WriteRequest(item.Connection, item.Command);
await _translator.Request(request);

// this request is the second request that is missing 125ms
var queryRequest = new Communication.Requests.WriteReadRequest(item.Connection, item.Query); // query that is being sent to check if the value has been sent properly
if (await _translator.Request(queryRequest)) // send the query to the device and wait for response
{
    if (item.IsQueryValid(queryRequest.Response)) // check result
    {
        item.Success = true;
    }
}
The first request that I'm sending to this method is a WriteRequest, the second one a WriteReadRequest.
I discovered this behavior when looking at the serial port communication using a software named Device Monitoring Studio to monitor the serial communication.
Here is a screenshot of the actual serial communication. In this case I was using a delay of 1000 ms. You can see that the sens0002 command had a delay of exactly 1 second before it was executed. The next command / query sens? only has an 875 ms delay. This screenshot was taken while the debugger was not attached.
Here is another screenshot. The delay was set to 1000 ms again, but this time the debugger was attached. As you can see, the first and second command now both have a delay of roughly 1000 ms.
And in the two following screenshots you can see the same behavior with a delay of 250 ms (which buggily drops to 125 ms). The first screenshot is without the debugger attached, the second one with it attached. In the second screenshot you can also see that there is quite a drift of 35 ms, but it is still nowhere close to the 125 ms that were missing before.
So what the hell am I looking at here? The quick and dirty solution would be to just increase the delay to 1000ms so that this won't be an issue anymore but I'd rather understand why this issue occurs and how to fix it properly.
Cheers!
As far as I can see, your times are printed as deltas to the previous entry.
In the 125/875 ms case you have 8 intermediate entries of roughly 15 ms each (summing to roughly 120 ms).
In the 250/1000 ms case you have 8 intermediate entries of roughly 5 ms each (summing to roughly 40 ms), and the numbers are actually more like 215/960 ms.
So if you add up those intermediate delays, the resulting complete delay is roughly the same, as far as I can tell.
Answering for everyone who just wants a yes/no on the question title: the First Rule of Programming applies: It's Always Your Fault.
It's safe to assume that Task.Delay covers at least the specified amount of time (it might be more due to clock resolution). So if it seems to cover a smaller time span, then the method used to measure the actual delay is faulty somehow.
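If you want to convince yourself of that, measure the delay itself with a Stopwatch right around the await, independently of the serial log, e.g. (inside an async method):

// Task.Delay should never complete before the requested time has elapsed.
var sw = System.Diagnostics.Stopwatch.StartNew();
await Task.Delay(250);
sw.Stop();
Console.WriteLine($"Requested 250 ms, actually waited {sw.ElapsedMilliseconds} ms");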
I'm calling a Web Service via SOAP. When the web service is down, we'd like it to fail quickly rather than wait for the default timeout. I use the Timeout property for this:
service.Timeout = 5000;
which I think should time out the operation after 5 seconds. However, I see that the operation doesn't time out until after 23 seconds, the same amount of time as the default timeout (i.e., as if the line above were not present).
The exception thrown is "The operation has timed out"; I just can't understand why it is not timing out in the time I've specified. What am I doing wrong?
Edit:
Here's the test program:
long start = Environment.TickCount;
try
{
    mdDqwsStatus.Service service = new mdDqwsStatus.Service();
    service.Timeout = 5000; // 5 sec
    string response = service.GetServiceStatus(customerID, pafID); // This calls the WS
    long end1 = Environment.TickCount - start; // I never hit this line
}
catch (Exception ex)
{
    long end2 = Environment.TickCount - start; // Failure goes to here
    OutputError(ex);
}
I set a breakpoint on the OutputError line and look at end2, and see 23000+ milliseconds.
Note that in the field, if the Web Service (which runs on IIS) has been stopped, the delay is quite short. However, if the machine is down or there are connectivity issues, the delay is 23 seconds (or sometimes considerably longer).
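For what it's worth, a bare TCP connect to the service host can be timed separately to see whether those 23 seconds are spent establishing the connection rather than waiting for the HTTP response (sketch with placeholder host/port; substitute the real endpoint):

// Diagnostic sketch: time how long a raw TCP connect to the service host takes.
// "example.com" and 443 are placeholders for the real service endpoint.
var sw = System.Diagnostics.Stopwatch.StartNew();
using (var tcp = new System.Net.Sockets.TcpClient())
{
    try { tcp.Connect("example.com", 443); }
    catch (Exception ex) { Console.WriteLine("Connect failed: " + ex.Message); }
}
Console.WriteLine($"Connect attempt took {sw.ElapsedMilliseconds} ms");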
My first post here, but this site has answered many questions I have had in the past. Hopefully I can give enough detail to explain the issue I am facing, as I don't fully understand how .NET handles the threads I create!
OK, so basically I have a thread set to run every 1000 ms which gets a frame counter from a video encoder and calculates the FPS. Accuracy is sufficient with a System.Threading.Timer for now, though I realise it isn't exact (often over 1000 ms between events). I also have another Threading.Timer which runs and takes a reading from a network-to-serial device. The issue is that if the network device becomes unavailable and the socket times out on that timer, the FPS timers go completely out of sync! They were previously executing every 1015 ms (measured), but when I start this other Threading.Timer trying to make a socket connection and it fails, it causes the FPS counter timers to go totally off (up to 7000 ms!!). I'm not quite sure why this should be, and I really need the FPS counter to run once a second pretty much no matter what.
Bit of code ->
FPS Counter
private void getFPS(Object stateInfo) // Run once per second
{
    int frames = AxisMediaControl.getFrames; // Axis Encoder media control
    int fps = frames - prevValue;
    prevValue = frames;
    setFPSBar(fps, fps_color); // Delegate to update progress bar for FPS
}
Battery Level Timer
while (isRunning)
{
    if (!comm.Connected) // comm is standard socket client
        comm.Connect(this.ip_address, this.port); // Timeout here causes other timer threads to go out of sync

    if (comm.Connected)
    {
        decimal reading = comm.getBatt_Level();
        // Calculate readings and update GUI
        Console.Out.WriteLine("Reading = " + (int)prog);
        break; // Debug
    }
}
This is the code used to connect to the socket currently ->
public Socket mSocket { get; set; }

public bool Connect(IPAddress ip_address, UInt16 port)
{
    try
    {
        mSocket.Connect(ip_address, port);
    }
    catch (Exception ex)
    {
        // exception is currently swallowed (see the note at the end about not leaving catch blocks empty)
    }
    return mSocket.Connected;
}
Hopefully not too ambiguous!
While I don't know why your FPS timer is not called for 7s, I can suggest a workaround: Measure the TimeSpan since the last time the FPS value was updated by remembering the Environment.TickCount value. Then, calculate the FPS value as (delta_frames / delta_t).
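A rough sketch of that idea, reusing the names from your snippet (prevValue, AxisMediaControl.getFrames, setFPSBar, fps_color):

// Assumed fields: prevValue already holds the last frame count; prevTicks is new.
private int prevValue;
private long prevTicks = Environment.TickCount;

private void getFPS(Object stateInfo) // still fired roughly once per second
{
    int frames = AxisMediaControl.getFrames; // Axis encoder frame counter
    long nowTicks = Environment.TickCount;

    // Use the time that actually elapsed instead of assuming exactly 1000 ms.
    double elapsedSeconds = (nowTicks - prevTicks) / 1000.0;
    double fps = elapsedSeconds > 0 ? (frames - prevValue) / elapsedSeconds : 0;

    prevValue = frames;
    prevTicks = nowTicks;

    setFPSBar((int)Math.Round(fps), fps_color); // same delegate as before
}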
Thanks for the comments, I fixed it by doing the following.
I used a System.Timers.Timer instead and set AutoReset to false. Each time one of the timers completes, I start it again; this means there is only ever one timer for each battery device. The problem with the initial solution was that the network timeout kept the threads alive for much longer than the timer interval, so to keep to the interval a new thread was spawned more and more frequently.
During runtime this meant there were about 5-7 threads for each battery timer (whereby 6 were timing out and 1 was about to begin). Changing to the new timer means there is only one thread now, as it should be.
I also added code to calculate the FPS based on the time actually elapsed (using the Stopwatch class for higher accuracy, thanks USR). Thanks for the help. I will also have to make sure not to just leave exception handlers empty.
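For anyone hitting the same thing, the pattern I ended up with looks roughly like this (simplified):

// One-shot timer that is re-armed at the end of each tick, so a slow tick
// (e.g. a socket timeout) can never cause overlapping callbacks.
var fpsTimer = new System.Timers.Timer(1000) { AutoReset = false };
var stopwatch = System.Diagnostics.Stopwatch.StartNew();
int prevFrames = 0;

fpsTimer.Elapsed += (s, e) =>
{
    int frames = AxisMediaControl.getFrames;
    double elapsedSeconds = stopwatch.Elapsed.TotalSeconds;
    stopwatch.Restart();

    // FPS based on the time that actually elapsed since the previous tick.
    double fps = elapsedSeconds > 0 ? (frames - prevFrames) / elapsedSeconds : 0;
    prevFrames = frames;
    setFPSBar((int)Math.Round(fps), fps_color);

    fpsTimer.Start(); // re-arm only after the work has finished
};
fpsTimer.Start();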