I'm playing around with sending messages through the Azure Service Bus library Azure.Messaging.ServiceBus. I'm following this tutorial and sending/processing a single message.
When calling processor.StopProcessingAsync(), the call takes about a minute (every single time). When looking in the Azure portal, all messages have been processed. I have no clue why it takes so long for the processor to stop even though there are no messages left on the queue.
It seems like it takes the (exact) same amount of time each time. If anyone could point me to why it takes such a long time and how to reduce it (configuration/setup?), I would be more than thankful. Thanks in advance!
OK. While going through the source code, I found that there's a default "wait time" of 60 seconds after a receiver is started. This can be lowered by setting TryTimeout on the client's retry options (ServiceBusClientOptions.RetryOptions, which is of type ServiceBusRetryOptions).
See:
ServiceBusRetryOptions
AmqpReceiver.ReceiveMessagesAsyncInternal
I tested this, and it works as expected:
var clientOptions = new ServiceBusClientOptions();
clientOptions.RetryOptions.TryTimeout = new TimeSpan(0, 0, 5);
await using var client = new ServiceBusClient(connectionString, clientOptions);
This sets the try timeout to 5 seconds.
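For context, here is roughly how that fits into the processor setup from the tutorial; this is only a sketch, with the queue name and handlers as placeholders. With TryTimeout lowered, StopProcessingAsync returns within a few seconds instead of a minute:
var clientOptions = new ServiceBusClientOptions();
clientOptions.RetryOptions.TryTimeout = TimeSpan.FromSeconds(5); // default is 60 seconds

await using var client = new ServiceBusClient(connectionString, clientOptions);
await using var processor = client.CreateProcessor("myqueue", new ServiceBusProcessorOptions());

processor.ProcessMessageAsync += args =>
{
    Console.WriteLine(args.Message.Body.ToString());
    return Task.CompletedTask; // message auto-completion is on by default
};
processor.ProcessErrorAsync += args =>
{
    Console.WriteLine(args.Exception.Message);
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();
await Task.Delay(TimeSpan.FromSeconds(10)); // let it run for a bit
await processor.StopProcessingAsync();      // now completes within roughly TryTimeout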
So basically I am running a program which is able to send up to 7,000 HTTP requests per second on average, 24/7, in order to detect the latest changes on a website as quickly as possible.
However, every 2.5 to 3 minutes on average, my program slows down for around 10-15 seconds and goes from ~7K rq/s to less than 1,000.
Here are logs from my program, where you can see the amount of requests it sends every second:
https://pastebin.com/029VLxZG
When scrolling down through the logs, you can see it goes slower every ~3 minutes. Example: https://i.imgur.com/US0wPzm.jpeg
At first I thought it was my server's Ethernet connection going into a temporary "restricted" mode, and I even tried contacting my host about it. But then I ran 2 instances of my program simultaneously just to see what would happen, and I noticed that even though the issue (downtime) was happening on both, it wasn't always happening at the same time (it depended on when each program was started, if you get what I mean), which meant the problem wasn't coming from the internet connection but from my program itself.
I investigated a little bit more, and found out that as soon as my program goes from ~7K rq/s to ~700, a lot of RAM is being freed up on my server.
I have taken 2 screenshots of the consecutive seconds just before and just after the downtime occurs (including RAM metrics), to compare, and you can view them here: https://imgur.com/a/sk2TYQZ (please note that I was using fewer threads here, which is why the average "normal" speed is ~2K rq/s instead of ~7K as mentioned before).
If you'd like to see more of it, here is the full record of the issue, in a video which lasts about 40 seconds: https://i.imgur.com/z27FlVP.mp4 - As you can see, after the RAM is freed up, its usage slowly goes up again, before the same process repeats every ~3 minutes.
For more context, here is the method I am using to send the HTTP requests (it is being called from a lot of threads concurrently, as my app is multi-threaded in order to be super fast):
public static async Task<bool> HasChangedAsync(string endpoint, HttpClient httpClient)
{
    const string baseAddress = "https://example.com/";
    string response = await httpClient.GetStringAsync(baseAddress + endpoint);
    return response.Contains("example");
}
One thing I did is I tried replacing the whole method body with await Task.Delay(25); return false;, and that fixed the issue: RAM usage barely increased.
This led me to believe the issue is with HttpClient / my HTTP requests, and even though I tried replacing GetStringAsync with GetAsync using both an HttpRequestMessage and an HttpResponseMessage (and disposing them with using), the behavior ended up being exactly the same.
So here I am, desperate for a fix, and without enough knowledge about memory, garbage collector etc (if that's even needed here) to be able to fix this myself.
Please, Stack Overflow, do you have any idea?
Thanks a lot.
Your best bet would be to stream the response and then scan it in chunks for what you are looking for. With HttpCompletionOption.ResponseHeadersRead the body is no longer buffered into one large string per request, which avoids the constant large allocations the garbage collector has to clean up. An example implementation could be something as follows:
using var response = await Client.GetAsync(BaseUrl, HttpCompletionOption.ResponseHeadersRead);
await using var stream = await response.Content.ReadAsStreamAsync();
using var reader = new StreamReader(stream);
string line;
while ((line = await reader.ReadLineAsync()) != null)
{
    if (line.Contains("example"))
    {
        // do whatever, e.g. return true and stop reading the rest of the body
    }
}
So I'm having some really strange behavior with a C# Task.Delay that is driving me insane.
Context: I'm using C#/.NET to communicate with one of our devices via R4852. The device needs roughly 200ms to finish each command, so I introduced a 250ms delay inside my communication class.
Bug / bad behavior: The delay inside my communication class sometimes waits for 250ms and sometimes only waits for 125ms. This is reproducible, and the same behavior occurs when I increase the delay. E.g. if I set the delay to 1000ms, every second request will only wait for 875ms, so again 125ms are missing.
This behavior only occurs if there is no debugger attached and only occurs on some machines. The machine where this software will be used in our production department is having this issue, my machine that I'm working on right now doesn't have this issue. Both are running Windows 10.
How come that there are 125ms missing from time to time?
I already learnt that the Task.Delay method uses a timer with a precision of about 15ms. This doesn't explain the missing 125ms, as it should at most fire a few milliseconds too late, not 125ms too early.
The following method is the one I use to queue commands to my device. A semaphore (_requestSemaphore) ensures that only one command can be executed at a time, so there can only ever be one request being processed.
public async Task<bool> Request(WriteRequest request)
{
    await _requestSemaphore.WaitAsync(); // block incoming calls
    await Task.Delay(Delay); // delay
    Write(_connectionIdDictionary[request.Connection], request.Request); // write
    if (request is WriteReadRequest)
    {
        _currentRequest = request as WriteReadRequest;
        var readSuccess = await _readSemaphore.WaitAsync(Timeout); // wait until read of line has finished
        _currentRequest = null; // set _currentRequest to null
        _requestSemaphore.Release(); // release next incoming call
        return readSuccess;
    }
    else
    {
        if (request is WriteWithDelayRequest)
        {
            await Task.Delay((request as WriteWithDelayRequest).Delay);
        }
        _requestSemaphore.Release(); // release next incoming call
        return true;
    }
}
The following code is part of the method that sends the requests to the method above. I removed some lines to keep it short; the basic flow (requesting and waiting) is still there.
// this command is the first command and will always have a proper delay of 1000ms
var request = new Communication.Requests.WriteRequest(item.Connection, item.Command);
await _translator.Request(request);

// this request is the second request that is missing 125ms
var queryRequest = new Communication.Requests.WriteReadRequest(item.Connection, item.Query); // query that is being sent to check if the value has been sent properly
if (await _translator.Request(queryRequest)) // send the query to the device and wait for response
{
    if (item.IsQueryValid(queryRequest.Response)) // check result
    {
        item.Success = true;
    }
}
The first request that I'm sending to this method is a WriteRequest, the second one a WriteReadRequest.
I discovered this behavior when looking at the serial port communication using software called Device Monitoring Studio.
Here is a screenshot of the actual serial communication. In this case I was using a delay of 1000ms. You can see that the sens0002 command had a delay of exactly 1 second before it was executed. The next command / query, sens?, only has an 875ms delay. This screenshot was taken while the debugger was not attached.
Here is another screenshot. The delay was set to 1000ms again, but this time the debugger was attached. As you can see, the first and second commands now both have a delay of roughly 1000ms.
And in the two following screenshots you can see the same behavior with a delay of 250ms (bugged down to 125ms). The first screenshot is without the debugger attached, the second one with it attached. In the second screenshot you can also see that there is quite a drift of 35ms, but that's still nowhere near the 125ms that were missing before.
So what the hell am I looking at here? The quick and dirty solution would be to just increase the delay to 1000ms so that this won't be an issue anymore but I'd rather understand why this issue occurs and how to fix it properly.
Cheers!
As far as I can see, your times are printed as deltas to the previous entry.
In the 125/875ms case you have 8 intermediate entries of roughly 15ms each (summing to roughly 120ms).
In the 250/1000ms case you have 8 intermediate entries of roughly 5ms each (summing to roughly 40ms), and the numbers are actually more like 215/960ms.
So, if you add those intermediate delays, the resulting complete delay is roughly the same as far as I can tell.
For everyone who just wants a yes/no answer to the question title, the First Rule of Programming applies: It's Always Your Fault.
It's safe to assume that Task.Delay covers at least the specified amount of time (it might be more due to clock resolution). So if it seems to cover a smaller timespan, the method used to measure the actual delay is faulty somehow.
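If you want a second measurement that is independent of the serial monitor, a quick sanity check (just a sketch) is to time the delay in code with a Stopwatch around the await:
var sw = System.Diagnostics.Stopwatch.StartNew();
await Task.Delay(250);
sw.Stop();
Console.WriteLine("Elapsed: " + sw.ElapsedMilliseconds + " ms"); // should never print less than 250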
Background: I must call a web service 1,500 times, and each call takes roughly 1.3 seconds to complete. (I have no control over this 3rd-party API.) Total time = 1500 * 1.3 = 1950 seconds / 60 = roughly 32 minutes.
I came up with what I thought was a good solution; however, it did not pan out that well.
So I changed the calls to async web calls, thinking this would dramatically improve my results. It did not.
Example Code:
Pre-Optimizations:
foreach (var elmKeyDataElementNamed in findResponse.Keys)
{
    var getRequest = new ElementMasterGetRequest
    {
        Key = new elmFullKey
        {
            CmpCode = CodaServiceSettings.CompanyCode,
            Code = elmKeyDataElementNamed.Code,
            Level = filterLevel
        }
    };
    ElementMasterGetResponse getResponse;
    _elementMasterServiceClient.Get(new MasterOptions(), getRequest, out getResponse);
    elementList.Add(new CodaElement { Element = getResponse.Element, SearchCode = filterCode });
}
With Optimizations:
var tasks = findResponse.Keys
    .Select(elmKeyDataElementNamed => new ElementMasterGetRequest
    {
        Key = new elmFullKey
        {
            CmpCode = CodaServiceSettings.CompanyCode,
            Code = elmKeyDataElementNamed.Code,
            Level = filterLevel
        }
    })
    .Select(getRequest => _elementMasterServiceClient.GetAsync(new MasterOptions(), getRequest))
    .ToList();
Task.WaitAll(tasks.ToArray());
elementList.AddRange(tasks.Select(p => new CodaElement
{
    Element = p.Result.GetResponse.Element,
    SearchCode = filterCode
}));
Smaller Sampling Example:
To test this more easily, I did a smaller sampling of 40 records. This took 60 seconds with no optimizations; with the optimizations it only took 50 seconds. I would have thought it would have been closer to 30 seconds or better.
I used Wireshark to watch the transactions come through and realized the async version was not sending requests as fast as I assumed it would.
Async requests captured
Normal no optimization
You can see that the async version pushes a few requests very fast, then drops off...
Also note that between requests 10 and 11 it took nearly 3 seconds.
Is the overhead of creating threads for the tasks so large that it takes seconds?
Note: the tasks I am referring to are from the .NET 4.5 TAP (Task-based Asynchronous Pattern) library.
Why wouldn't the requests go out faster than that?
I was told the Apache web server I was hitting can handle a maximum of 200 threads, so I don't see an issue there.
Am I not thinking about this clearly?
When calling web services, is there little advantage to async requests?
Do I have a code mistake?
Any ideas would be great.
After many days of searching I found this post that solved my problem:
Trying to run multiple HTTP requests in parallel, but being limited by Windows (registry)
The reason the requests were not hitting the server more quickly was my client-side code and had nothing to do with the server. By default, .NET only allows 2 concurrent connections to the same host.
see here: http://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.defaultconnectionlimit.aspx
I simply added this line of code and then all requests came through in milliseconds.
System.Net.ServicePointManager.DefaultConnectionLimit = 50;
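For reference, here is roughly how it fits together. This is only a sketch: requests stands for the list of ElementMasterGetRequest objects built exactly as in the question, the service client is the same one from there, and it needs using System.Linq;.
// Set once at startup, before any requests are created.
System.Net.ServicePointManager.DefaultConnectionLimit = 50;

var tasks = requests
    .Select(getRequest => _elementMasterServiceClient.GetAsync(new MasterOptions(), getRequest))
    .ToList();
Task.WaitAll(tasks.ToArray()); // up to 50 requests are now in flight at once instead of 2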
My application can receive up to roughly 100 requests for a batch job within a few milliseconds, but in reality these job requests really represent a single job request.
Fixing things so that only one job request is sent is just not feasible at the moment.
A workaround I have thought of is to program my application to fulfill only one batch job every x milliseconds (in this case I was thinking of 200 milliseconds) and to ignore any other batch job requests that come in during those 200 milliseconds or while a batch job is running. After the 200 milliseconds are up, or when the batch job has completed, my application will wait for and accept one job request from that point on; it will not process any requests that were ignored earlier. Once my application accepts another job request, it repeats the cycle above.
What's the best way of doing this using .NET 4.0? Is there any boilerplate code that I can simply follow as a guide?
Update
Sorry for being unclear. I have added more details about my scenario. Also I just realized that my proposed workaround above will not work. Sorry guys, lol. Here's some background information.
I have an application that builds an index using files in a specified directory. When a file is added, deleted or modified in this directory, my application listens for these events using a FileSystemWatcher and re-indexes the files. The problem is that around 100 files can be added, deleted or modified by an external process, and these changes occur very quickly, i.e. within a few milliseconds. My end goal is to re-index these files after the last file change made by the external process has occurred. The best solution would be to modify the external process to signal my application when it has finished modifying the files I'm listening to, but that's not feasible at the moment. Thus, I have to create a workaround.
A workaround that may solve my problem is to wait for the first file change. When the first file change has occurred, wait 200 milliseconds for any subsequent file changes. Why 200 milliseconds? Because I'm hoping, and fairly confident, that the external process can perform its file changes within 200 milliseconds. Once my application has waited for 200 milliseconds, I would like it to start a task that re-indexes the files and then goes through another cycle of listening for a file change.
What's the best way of doing this?
Again, sorry for the confusion.
This question is a bit too high level to guess at.
My guess is that your application runs as a service, requests come into your application and land in a queue to be processed, and every 200 ms you wake the queue and pop an item off for processing.
I'm confused about the "masked as one job request". Since you mentioned you will "ignore any other batch job", my guess is you haven't arranged your code to accept the incoming requests in a queue.
Regardless, you will generally always have one application process running (your service), and if you choose you could spawn a new thread for each item you process in the queue. You can monitor how much CPU/memory utilization this requires and adjust the firing time (200ms) accordingly.
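As a rough sketch of that idea (all names here are made up for illustration, not taken from your code):
using System;
using System.Collections.Concurrent;
using System.Timers;

class BatchService
{
    private readonly ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();
    private readonly Timer _timer = new Timer(200); // fires every 200 ms

    public BatchService()
    {
        _timer.Elapsed += delegate
        {
            string item;
            if (_queue.TryDequeue(out item))
            {
                Console.WriteLine("Processing " + item); // replace with the real batch work
            }
        };
        _timer.Start();
    }

    // incoming requests are enqueued instead of being processed immediately
    public void Enqueue(string request)
    {
        _queue.Enqueue(request);
    }
}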
I may not be accurately understanding the problem, but my recommendation is to use the singleton pattern to work around this issue.
With the singleton approach, you can implement a lock on an object (the access method could potentially be something along the lines of BatchProcessor::GetBatchResults) that would then serialize all requests to the batch job results object. Once the batch has finished, the lock will be released and the underlying object will have the results of the batch job available.
Please keep in mind that this is a "work around". There may be a better solution that involves looking into and changing the underlying business logic that causes multiple requests to come in for a job that's processing on demand.
Update:
Here is a link for information regarding Singleton (includes code examples): http://msdn.microsoft.com/en-us/library/ff650316.aspx
It is my understanding that the poster has some sort of application that sits and waits for incoming requests to perform a batch job. The problem is that he is receiving multiple requests within a short period of time that should actually have come in as just a single request. And, unfortunately, he is not able to solve this problem at the source.
So, his solution is to assume that all requests received within a 200 ms timespan are the same, and to only process them once. My concern with this would be whether that assumption is correct; this entirely depends on the sending systems and the environment in which this is being used. The general idea would be to update a lastReceived date/time when a request is processed. Then, when a new request comes in, compare the current date/time to the lastReceived date/time and only process it if the difference is greater than 200 ms.
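A minimal sketch of that lastReceived check (the names are made up for illustration):
private static DateTime _lastReceived = DateTime.MinValue;
private static readonly object _sync = new object();

static bool ShouldProcess(DateTime requestTime)
{
    lock (_sync)
    {
        if ((requestTime - _lastReceived).TotalMilliseconds <= 200)
        {
            return false; // treat it as a duplicate of the previous request
        }
        _lastReceived = requestTime;
        return true;
    }
}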
Other possible solutions:
You said you could not modify the sending application so only one job request was sent, but could you add additional information to it, for instance a unique identifier?
Could you store the parameters from the last job request and compare it with the next job request and only process them if they are different?
Based on your Update
Here is an example of how you could wait 200ms using a Timer:
using System;
using System.IO;
using System.Timers;

class Program
{
    static Timer timer;
    static int waitTime = 200; // in ms

    static void Main(string[] args)
    {
        FileSystemWatcher fsw = new FileSystemWatcher();
        fsw.Path = @"C:\temp\";
        fsw.Created += new FileSystemEventHandler(fsw_Created);
        fsw.EnableRaisingEvents = true;
        Console.ReadLine();
    }

    static void fsw_Created(object sender, FileSystemEventArgs e)
    {
        DateTime currTime = DateTime.Now;
        if (timer == null)
        {
            // the first change in a burst starts the wait window
            Console.WriteLine("Started @ " + currTime);
            timer = new Timer();
            timer.Interval = waitTime;
            timer.AutoReset = false; // fire only once per window
            timer.Elapsed += new ElapsedEventHandler(timer_Elapsed);
            timer.Start();
        }
        else
        {
            // subsequent changes inside the window are ignored
            Console.WriteLine("Ignored @ " + currTime);
        }
    }

    static void timer_Elapsed(object sender, ElapsedEventArgs e)
    {
        // Start the re-indexing task here
        Console.WriteLine("Elapsed @ " + DateTime.Now);
        timer.Dispose();
        timer = null; // ready for the next burst of changes
    }
}
What is a reasonable amount of time to wait for a web request to return? I know this is maybe a little loaded as a question, but all I am trying to do is verify if a web page is available.
Maybe there is a better way?
try
{
    // Create the web request
    HttpWebRequest request = WebRequest.Create(this.getUri()) as HttpWebRequest;
    if (request != null)
    {
        request.Credentials = System.Net.CredentialCache.DefaultCredentials;
        // 2 minutes for timeout
        request.Timeout = 120 * 1000;
        // Get response
        response = request.GetResponse() as HttpWebResponse;
        connectedToUrl = processResponseCode(response);
    }
    else
    {
        logger.Fatal(getFatalMessage());
        string error = string.Empty;
    }
}
catch (WebException we)
{
    ...
}
catch (Exception e)
{
    ...
}
You need to consider how long the consumer of the web service is going to take, e.g. if you are connecting to a DB web server and running a lengthy query, you need to make the web service timeout longer than the time the query will take. Otherwise, the web service will (erroneously) time out.
I also use something like (consumer time) + 10 seconds.
Offhand I'd allow 10 seconds, but it really depends on what kind of network connection the code will be running with. Try running some test pings over a period of a few days/weeks to see what the typical response time is.
I would measure how long it takes for pages that do exist to respond. If they all respond in about the same amount of time, then I would set the timeout period to approximately double that amount.
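A minimal sketch of that approach, reusing HttpWebRequest from the question (uri here is whatever known-good page you are measuring against):
// Time a known-good request once, then use roughly double that as the timeout.
var sw = System.Diagnostics.Stopwatch.StartNew();
var probe = (HttpWebRequest)WebRequest.Create(uri);
using (probe.GetResponse()) { }
sw.Stop();

var request = (HttpWebRequest)WebRequest.Create(uri);
request.Timeout = (int)(sw.ElapsedMilliseconds * 2); // Timeout is in milliseconds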
Just wanted to add that a lot of the time I'll use an adaptive timeout. Could be a simple metric like:
period += (numTimeouts/numRequests > .01 ? someConstant: 0);
checked whenever you hit a timeout to try and keep timeouts under 1% (for example). Just be careful about decrementing it too low :)
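Spelled out a bit more (just a sketch; the constants and names are arbitrary), that might look like:
class AdaptiveTimeout
{
    private int _numRequests;
    private int _numTimeouts;
    private const int BumpMs = 2000;                     // how much to back off by each time
    public int PeriodMs { get; private set; } = 10000;   // current timeout, starting at 10 s

    public void Record(bool timedOut)
    {
        _numRequests++;
        if (timedOut)
        {
            _numTimeouts++;
            if ((double)_numTimeouts / _numRequests > 0.01) // keep timeouts under ~1%
            {
                PeriodMs += BumpMs; // be careful not to let this grow (or shrink) unbounded
            }
        }
    }
}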
The reasonable amount of time to wait for a web request may differ from one server to the next. If a server is at the far end of a high-delay link then clearly it will take longer to respond than when it is in the next room. But two minutes seems like it's more than ample time for a server to respond. The default timeout value for the PING command is expressed in seconds, not minutes. I suggest you look into the timeout values that are used by networking utilities like PING or TRACERT for inspiration.
I guess this depends on two things:
network speed/load (as others wrote, using ping might give you an idea about this)
the kind of page you are calling: e.g. is it a static HTML page or is it a page which might do some time-consuming operations (DB access, etc.)
Anyway, I think 2 minutes is a lot of time. I would definitely reduce the timeout to less than 30 seconds.
I realize this doesn't directly answer your question, but then an "answer" to this question is a little tough. Anyway, a tool I've used in the past to measure page load times from various parts of the world is Gomez. It's free, and if you haven't done this kind of testing before it might be helpful in terms of giving you a firm idea of what typical page load times are for a given page from a given location.
I would only wait 30 seconds max, probably closer to 15. It really depends on what you are doing and what the consequence of an unsuccessful connection is. As I am sure you know, there are lots of reasons why you could get a timeout...