Why does sending many emails asynchronously block the site? (using Action.BeginInvoke) - C#

I created a page to send thousands of emails to our clients, almost 8K emails.
The sending process takes hours, but after a while I couldn't access any page in the site hosting it (requests just kept waiting...), except for static files (images etc.).
Using: IIS 6 and .Net 4.0
Code:
public static bool Send(MailSettings settings, Action<string, string[], bool> Sent = null)
{
    System.Net.Mail.SmtpClient client;
    ...
    foreach (...)
    {
        try { client.Send(message); } catch { ...client.Dispose();... }
        Sent.BeginInvoke(stringValue, stringArray, boolValue, null, null);
        if (count++ > N)
        {
            count = 1;
            System.Threading.Thread.Sleep(1000);
        }
    }
    ...
}

public void SentComplete(string email, string[] value, bool isSent)
{
    .... // DB logging
}
Note: Other sites using the same Application pool were fine!
Questions:
Is there an IIS 6.0 limitation on the number of threads for the same website?
Any ideas whether my code is causing performance issues? Am I using Action correctly?

There are many things wrong with this code:
- Your try/catch block is very weird. You are disposing an object in one iteration that other iterations may still use. Use a using block instead (see the sketch after this list).
- There is a maximum time an ASP.NET request is allowed to execute.
- Don't put Thread.Sleep in ASP.NET!
- Yes, there is a maximum thread count in the ASP.NET pool.
- If you end up blocked for one reason, you can also be blocked by the session (if you have one)... Does it work if you open a different browser?
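A minimal sketch of the first point, assuming standard System.Net.Mail types (the host name and the surrounding class are placeholders, not the asker's actual code): create the SmtpClient once, let a using block dispose it, and never dispose it from inside the loop.

using System;
using System.Collections.Generic;
using System.Net.Mail;

public static class Mailer
{
    // Sketch: one SmtpClient for the whole batch, disposed by the using block,
    // so no iteration can dispose an object that later iterations still need.
    public static void SendAll(IEnumerable<MailMessage> messages)
    {
        using (var client = new SmtpClient("smtp.example.com")) // placeholder host
        {
            foreach (var message in messages)
            {
                try
                {
                    client.Send(message);
                }
                catch (SmtpException ex)
                {
                    // Log and move on; do not dispose the shared client here.
                    Console.Error.WriteLine("Failed to send to {0}: {1}", message.To, ex.Message);
                }
            }
        }
    }
}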

You're firing off a whole bunch of actions to be executed on the thread pool. There is a maximum number of threads the thread pool will create; after that, work items sent to the pool simply get queued up. Since you're flooding the thread pool with so many operations, you're preventing it from ever getting a chance to work on the items ASP.NET adds to handle pages. Static files don't need to push work to the thread pool, which is why those requests can still be serviced.
You shouldn't be firing off so many items in parallel. Limit the degree of parallelism to a reasonably small fixed amount, and let that handful of workers each process a large number of your operations, so that the threads in the thread pool can work on other things as well.
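A rough sketch of what limiting the degree of parallelism could look like, using Parallel.ForEach with a fixed MaxDegreeOfParallelism (the degree of 4, the host name, and the logging delegate are assumptions for illustration, not the asker's code):

using System;
using System.Collections.Generic;
using System.Net.Mail;
using System.Threading.Tasks;

public static class ThrottledMailer
{
    // Sketch: at most a handful of workers chew through the whole list,
    // instead of queuing one thread-pool work item per email.
    public static void SendAll(IEnumerable<MailMessage> messages, Action<MailMessage, bool> logSent)
    {
        var options = new ParallelOptions { MaxDegreeOfParallelism = 4 }; // small, fixed degree

        Parallel.ForEach(messages, options, message =>
        {
            // A fresh client per message; SmtpClient is not safe to share across threads.
            using (var client = new SmtpClient("smtp.example.com")) // placeholder host
            {
                bool sent = true;
                try { client.Send(message); }
                catch (SmtpException) { sent = false; }
                logSent(message, sent); // log synchronously instead of Sent.BeginInvoke
            }
        });
    }
}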

We regularly send 10,000 client emails. I store all the details of the email in a database and then call a web service to send them. This just chugs through them and affects nothing else. I do put the thread to sleep (in the web service) for 100 ms between each call to Send... if I don't do this, firing off so many emails seems to overwhelm our mail server and we get some odd things happening.

C# - Best way to continuously check a process

Scenario: an Azure WebJob that gets all the Vendor records from NetSuite via WSDL.
Problem: the dataset is too large. Even with the service timeout set to 12 minutes, it still times out and the code fails.
NetSuite has an async process that basically runs whatever you want on the server and returns a JobId that lets you check the progress of the job on the server.
What I currently do is make a search call first, asking for all the Vendor records to be processed on the server. After I get the JobId, I wrote a void recursion that checks whether the job is finished on the server, with Thread.Sleep set to 10 minutes.
private static bool ChkProcess(VendorsService vendorService, string jobId)
{
    var isJobDone = false;

    // Recursion: local function that polls the job status every 10 minutes.
    void ChkAsyncProgress(bool isFinish)
    {
        if (isFinish) return;

        var chkJobProgress = vendorService.NsCheckProcessStatus(jobId);
        if (chkJobProgress.OperationResult.IsFinish) isJobDone = true;

        Thread.Sleep(TimeSpan.FromMinutes(10));
        ChkAsyncProgress(isJobDone);
    }

    ChkAsyncProgress(false);
    return isJobDone;
}
It works, but is there a better approach?
Thanks
I think that since you're already working with Azure, with Service Bus you can implement a really low-cost solution for this (if not free, depending on how frequently your job runs).
Basically it's a queue where you enqueue messages (which can be objects with properties too, so they could also contain the result of the elaboration).
A Service Bus is used to enqueue.
An Azure Function of type ServiceBusTrigger automatically listens for new messages arriving on the Service Bus and gets triggered when one does (or you can enqueue messages that only become available after a certain future time).
So, at the end of the WebJob code, you could add code to enqueue a message marking that the WebJob has finished its elaboration.
The Azure Function will be notified as soon as the message lands in the queue, and you can retrieve the data without constantly polling for job completion; Azure takes care of all of that for you at a ridiculously low price and without any effort on your part.
Also, these functions aren't priced per unit of time but per execution, so you pay only when a message is effectively put in the queue.
They have a certain number of free executions, so it might be that you don't even need to pay anything.
Here is some Microsoft sample code for doing so.
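Not that Microsoft sample, but a minimal sketch of what such a ServiceBusTrigger function could look like (the queue name, connection setting name, and message shape are assumptions, not part of the original answer):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class JobFinishedFunction
{
    // Fires automatically as soon as the WebJob enqueues its "finished" message.
    [FunctionName("JobFinished")]
    public static void Run(
        [ServiceBusTrigger("netsuite-job-finished", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        // The message body could carry the NetSuite JobId (or the result itself).
        log.LogInformation("NetSuite job reported finished: {message}", message);
        // ... fetch and process the results here instead of polling every 10 minutes ...
    }
}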

Is there any way to redirect the page first then execute the remaining code

I am new to Azure Web Apps. Is there any way to redirect the page first and then execute the remaining code? I am stuck in a situation where I have to redirect my page first, then execute the remaining code. I have deployed my code on an Azure Web App, which has a request timeout of about 4 minutes (which is not configurable), but my code takes approximately 15 minutes to execute. I want to redirect to the main page and execute the remaining code in the background. I have tried threads and parallel programming but still have no luck; I cannot get around the time limit and my web page gets a request timeout every time. Is there a way anyone can suggest?
Thanks for help!
/* functionA and functionB are not executed after redirecting. */
private static async Task<int> functionA(para1, para2)
{
    Task<int> temp1 = await functionB(y, z);
    return int;
}

private static async Task<int> functionB(para1, para2)
{
    return int;
}

/* This method executes first */
private string functionC(para1, para2, para3)
{
    console.log("hello world");
    redirect.response("www.xyz.com");
    Task<int> temp = await functionA(x, y);
    return str; // return string type value
}
If you've got heavy processing that will result in an HTTP timeout, I suggest offloading the processing to a WebJob or Azure Function. It would work as follows:
Your Azure Web App receives an HTTP request for a long-running operation. It gathers the necessary information, creates a Service Bus queue message, and fires the message off. Your Web App then responds to the user, telling them that the processing has begun.
Provision a separate WebJob or Azure Function that monitors your Service Bus queue for messages. When a message is received, the WebJob/Function performs the processing.
You will probably want to tell your user when the operation has completed and what the result is. You have a few options. The slickest would be to use SignalR to push a notification that the operation has completed to your users. A less sophisticated option would be to have your WebJob/Function update a database record and then have your HTTP clients poll for the result.
I've personally used this pattern with Service Bus queues/WebJobs/SignalR and have been very pleased with the results.
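A hedged sketch of the first step, assuming an ASP.NET Core controller and the Azure.Messaging.ServiceBus package (the queue name, the ReportRequest type, and the route are invented for illustration): enqueue the work, then answer the user immediately.

using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Microsoft.AspNetCore.Mvc;

public record ReportRequest(string CustomerId); // hypothetical payload

public class LongRunningController : Controller
{
    private readonly ServiceBusSender _sender;

    public LongRunningController(ServiceBusClient client)
    {
        _sender = client.CreateSender("long-running-work"); // queue name is an assumption
    }

    [HttpPost]
    public async Task<IActionResult> Start(ReportRequest request)
    {
        // Hand the heavy work to the WebJob/Function via the queue...
        await _sender.SendMessageAsync(new ServiceBusMessage(JsonSerializer.Serialize(request)));

        // ...and tell the user processing has begun without waiting for it to finish.
        return Accepted(new { status = "processing started" });
    }
}

The WebJob/Function side then picks the message up, much like the ServiceBusTrigger sketch shown earlier in this thread.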
Asynchronous operations in Azure Storage queues and WebJobs can help in a situation like the one stated.
I have referred to this:
https://dev.office.com/patterns-and-practices-detail/2254

WCF hosted in IIS has lots of open requests and the service slows down

I have a WCF web service based on JSON and the POST method, which has a function called website. This function has simple code and only calls another web service using the code below:
using (var cli = new MyWebClient())
{
    Task<string> t = cli.UploadStringTaskAsync(myURI, "POST", request);
    if (t == await Task.WhenAny(t, Task.Delay(400)))
    {
        response = t.Result;
    }
    else
    {
        response = "";
    }
    cli.Dispose();
}
and MyWebClient class is implemented as:
class MyWebClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        if (request is HttpWebRequest)
        {
            (request as HttpWebRequest).KeepAlive = true;
            (request as HttpWebRequest).ContentType = "application/json";
        }
        return request;
    }
}
The problem is that I can see in IIS that a large number of requests remain open for more than 18 seconds, and even longer for 1 or 2 of my worker processes (as you can see in the attached image for one of them). This slows the service down considerably. Note that this service handles about 2K requests per second, the application pool is a web garden containing 12 worker processes, and the queue limit is 10K. This situation occurs when there are (for example) 4 worker processes working within a predictable time (about 450 ms) and IIS shows that the maximum elapsed time for their requests is about 380 ms.
[Screenshot: a large number of requests remain open in IIS]
Note that I have used cli.UploadStringTaskAsync, so the client's timeout is not applied to cli. That is why I had to implement code like t == await Task.WhenAny(t, Task.Delay(400)) to simulate a timeout.
What do you think the problem is? Does using await cause many context switches, so that the requests are queued waiting to be executed by the CPU?
Edit:
Here you can find some recommendations that are helpful, but none of them solved the problem. I set them up in my application's web.config but it didn't resolve my issue.
Update:
As additional information, note that the network card is 1 Gb and at most we have 100 Mb/s of bandwidth usage. There are 6 cores and 12 logical processors (Intel Xeon E5-1650 v3, 3.5 GHz). We have 128 GB of RAM and a 480 GB SSD.
I found a solution that solves the problem. The key point was "processModel Element (ASP.NET Settings Schema)". As I mentioned in my question:
This situation occurs when there are (for example) 4 worker processes working within a predictable time (about 450 ms) and IIS shows that the maximum elapsed time for their requests is about 380 ms.
So, I think balancing the load among the worker processes could be the problem. By configuring the processModel element manually I solved the issue. After researching a lot I found this valuable link about the processModel element and its properties.
Also, this link describes all the properties and the effect of each one. As mentioned in the link, there are two important properties called "requestLimit" and "requestQueueLimit":
requestQueueLimit: Specifies the number of requests that are allowed in the queue before ASP.NET begins returning the message "503 – Server Too Busy" to new requests. The default is 5000.
requestLimit: Specifies the number of requests that are allowed before ASP.NET automatically launches a new worker process to take the place of the current one. The default is Infinite.
The solution is to limit requestLimit to a reasonable number, 300 in my case, and to set requestQueueLimit to the number of worker processes multiplied by requestLimit. I increased the number of worker processes to 20, so with this configuration 6000 requests can be queued in total and each worker process handles at most 300 requests. On reaching 300 requests per worker process, ASP.NET automatically launches a new worker process to take the place of the current one.
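A sketch of what those settings could look like, using the numbers from this answer (the processModel element is configured in machine.config; treat the exact values as an illustration of the idea, not a recommendation):

<!-- machine.config, inside <system.web> -->
<processModel
    autoConfig="false"
    requestLimit="300"
    requestQueueLimit="6000" />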
So the load is balanced better among the worker processes. I have checked all the queues and there is no request with more than 400 ms elapsed!
I think this solution could be used as a semi-load-balancing algorithm for IIS worker processes, by playing with these properties (requestLimit, requestQueueLimit and the number of worker processes).

long process in asp

My situation is this:
A page was created that runs a long process... This process consists of:
- Reading a .csv file; for each row of the file an invoice is created... at the end of this process a success message is shown.
For this it was decided to use an UpdatePanel, so that the process is asynchronous and can display an UpdateProgress while waiting for the process to finish. To support this, AsyncPostBackTimeout = 7200 (2 hours) was added to the ScriptManager properties, and the timeout was also increased in the web.config of the app, on both the QA and production servers.
Tests were made on localhost and on a QA server and it works very well; the problem arises when testing the functionality on the production server.
What happens is: the file is loaded and the process starts... during this period the UpdateProgress is running, but it only takes between 1 and 2 minutes and the execution ends without displaying the last message, as if the process were truncated. When reviewing the invoices created, only the first 10 records of the file were created (from a file with 50, 100 or more rows).
So I would like some help with this, because I don't know what could be wrong.
asp.net is not suited for long running processes.
The default ASP.NET page execution timeout is 110 seconds (90 for .NET 1.x). You can increase this, but it is not recommended.
If you must do it, here is the setting:
<system.web>
  ...
  <httpRuntime executionTimeout="180"/>
  ...
</system.web>
Refer to httpRuntime.
Pass this work on to a Windows service, a WCF service, or a standalone exe.
Use your page to get the status of the process from that application (a sketch of this hand-off follows below).
Here is an example that shows how to use workflows for long-running processes.
You move the bulk of the processing out of ASP.NET and free its threads to handle page requests.
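A rough sketch of that hand-off, assuming a Jobs table and a separate worker process (the table, column, and connection-string names are invented for illustration): the page only records the job and polls its status; the CSV/invoice work happens in the external service.

using System;
using System.Configuration;
using System.Data.SqlClient;

public static class InvoiceJobGateway
{
    private static string ConnectionString =>
        ConfigurationManager.ConnectionStrings["AppDb"].ConnectionString; // assumed name

    // Called from the page: record the uploaded file and return immediately.
    public static Guid Enqueue(string csvPath)
    {
        var jobId = Guid.NewGuid();
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO Jobs (Id, CsvPath, Status) VALUES (@id, @path, 'Pending')", conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            cmd.Parameters.AddWithValue("@path", csvPath);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        return jobId; // the page shows this id and polls GetStatus with it
    }

    // Called from the page (e.g. from a timer/UpdatePanel) to display progress.
    public static string GetStatus(Guid jobId)
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand("SELECT Status FROM Jobs WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            return cmd.ExecuteScalar() as string ?? "Unknown";
        }
    }
}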

Can multiple WebClient interfere with each other?

I must build an application that will use WebClient multiple times to retrieve information from a server every "t" seconds.
Here is a small plan to show you what I'm doing in my application:
Connect to the Web Client "USER_LOGIN" that returns me a GUID(user unique ID). I save it and keep it to use it in future Web Client calls.
Connect to the Web Client "USER_GETINFO" using the GUID I saved before as parameter. This Web Service returns an array of strings holding all my personal user information( my Name, Age, Email, etc...). => I save the array information this way: Textblock.Text = e.Result[2].
Starting a Dispatcher.Timer with a 2 seconds Tick to start my Loop. (Purpose of this is to retrieve information and update it every 2 seconds)
Connect to the Web Client "USER GETFRIEND", wich is in my Timer, giving him the GUID as parameter. It returns me an array filled with my friends informations(Name, email, message, etc...). I inserted this WebClient in the timer so my friend list refreshes every 2 seconds.
I am able to create all the steps without any error until step 3. When I call the "USER_GETFRIEND" Web Client I am facing two major problems:
On one side I noticed that my number of Thread increased dramatically. => I always thought that when a WebClient had finished its instructions it would shut down by itself, but apparently that does not happen in Asyncronous calls.
And on the other side I was surprised to see that using the same proxy for two Webclient calls(ie: if i declare test.MainSoapClient proxy = new test.MainSoapClient()), the data i would retrieve from "USER_GETFRIEND" e.Result, was sent directly to my "USER_GETINFO" array. And so my Name and Email adresses on the UI were replaced by the same value in the USER_GETFRIEND array. So my Name is changed to my friends email and so on...
I would like to know if it's possible to close a WebClient call(or Thread) that I am not using anymore to prevent any conflicts? Or if someone has any suggestion concerning my code and the way i should develop my application please feel free to propose.
I got the answer a few weeks ago and figured it was important to answer my own question.
My whole problem was that I wasn't unsubscribing from my asynchronous calls and that I was reusing the same proxy class from "Add Service Reference":
So when I was using:
proxy.webservice += new EventHandler<whateverinhere>(my_method);
I never did:
proxy.webservice -= new EventHandler<whateverinhere>(my_method);
Hope it helps someone.
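For anyone hitting the same thing, here is a small sketch of that unsubscribe pattern (the proxy and operation names below are only the kind of identifiers "Add Service Reference" generates, and UpdateFriendList / userGuid are hypothetical placeholders, not the asker's real members):

// Sketch only: adjust the generated type/member names to your actual service reference.
var proxy = new test.MainSoapClient();

EventHandler<test.USER_GETFRIENDCompletedEventArgs> handler = null;
handler = (sender, e) =>
{
    proxy.USER_GETFRIENDCompleted -= handler; // unsubscribe so later calls can't reuse this handler
    if (e.Error == null)
    {
        UpdateFriendList(e.Result); // hypothetical UI update method
    }
};

proxy.USER_GETFRIENDCompleted += handler;
proxy.USER_GETFRIENDAsync(userGuid); // pass the GUID saved from USER_LOGIN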
