I'm having an issue sending large volumes of emails out from an ASP.Net application. I won't post the code, but instead explain what's going on. The code should send emails to 4000 recipients but seems to stall at 385/387.
The code creates the content for the email in a string.
It then selects a list of email addresses to send to.
Looping through the data via a datareader it picks out the email address and sends an email.
The email sending is done by a separate method which can handle failures and returns its outcome.
As each record is sent I produce an XML node in an XML document to log each specific attempt to send.
The loop seems to end prematurely and the XML document is saved to disk.
Now I know the code works. I have run it locally using the same SMTP machine and it worked fine with 500 records. Granted there was less content, but I can't see how that would make any difference.
I don't think the page itself times out, but even if it did, I was sure .Net would continue processing the page, even if the user saw a page time out error.
Any suggestions appreciated, because I'm pretty stumped.
You're sending lots of emails. During the span of a single request? IIS will kill a request if it takes longer than a certain (configurable) amount of time.
You need to use a separate process to do stuff like this. Whether that's a Timer you start from within global.asax, or a Thread which checks for a list of emails in a database/app_data directory, or a service you send a request to via WCF, or some combination of these.
The way I've handled this in the past is to queue the emails into a SQL Server table and then launch another thread to actually process/send the emails. Another aspx utility page can give me the status of the queue or restart the processing.
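To make the shape of that concrete, here is a minimal sketch of the queue-plus-worker-thread idea, assuming an EmailQueue table and a status/utility page alongside it (all names here are placeholders, not a drop-in implementation):

using System.Threading;

public static class EmailQueueWorker
{
    // Called from a "start sending" utility page (or from Application_Start).
    public static void Start()
    {
        var worker = new Thread(ProcessQueue) { IsBackground = true };
        worker.Start();
    }

    private static void ProcessQueue()
    {
        // Read the pending rows from the EmailQueue table, send each one with
        // System.Net.Mail.SmtpClient, and update the row's status as you go.
        // A status page then simply queries the same table; a "restart" page calls Start() again.
    }
}

The usual caveat with any in-process thread applies: an application pool recycle will kill it, which is why a genuinely separate process (service or scheduled task) is the safer option.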
I also highly recommend that you use an existing, legit, third-party mailing service for your SMTP server if you are sending mail out to the general public. Otherwise you run the risk of your ISP shutting off your mail access or (worse) your own server being blacklisted.
If the web server has a timeout setting, it will kill the page if it runs too long.
I recommend you check the value of HttpServerUtility.ScriptTimeout - if this is set, then once a script has run for that length of time it will be shut down.
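If you want to check or raise it for this one long-running page, a quick sketch (ScriptTimeout is expressed in seconds; the default is 110 seconds in ASP.NET 2.0 and later, and it is only enforced when compilation debug="false"):

protected void Page_Load(object sender, EventArgs e)
{
    Response.Write("Current timeout: " + Server.ScriptTimeout + " seconds");
    Server.ScriptTimeout = 3600;   // give this long-running page up to an hour
}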
Something you could do to help is go completely old-school - combine some Response.Write calls with the occasional Response.Flush to send some data back to the client browser; this tends to keep the script alive (it certainly worked on an old ASP.NET 1.1 site we had).
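Something along these lines inside the send loop - SendEmail and recipients are stand-ins for the asker's own send method and datareader loop:

foreach (string recipient in recipients)   // stand-in for the datareader loop
{
    SendEmail(recipient);                  // the existing send-and-log method
    Response.Write(".");                   // trickle a token back per recipient...
    Response.Flush();                      // ...and push it to the browser immediately
}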
Also, you need to take into account when this script is being run. The server may well have been configured to perform an application pool recycle (by default this is set to every 29 hours in IIS); if your server is set to something like 24 hours and this coincides with the time your script is run, you could be seeing that too - although the fact that the script is logging its responses probably rules that out, unless your XML document is badly formed?
All that being said, I'd go with Will's answer of using a separate process (not just a thread hosted by the site), or, as Bryan said, go with a proper mailing service, which will help you with things like bounce-backs, click tracking, reporting, open counts, etc.
I have an application written in ASP.NET C# that should run for about 2 days and then stop until it is started again.
The application sometimes runs into some trouble that, at the moment, requires manual intervention to make it continue.
(So far, SQL server in itself was never the problem)
When working, the application writes a line to a log database table at least every 20 minutes.
If these lines are not there, it means the application either finished the task or needs help to continue. In these cases, I would like to get an email or some other notification.
What is the best way to detect and notify me if or when the application has stopped writing these lines to the log database table?
If it makes things easier, I could make the application perform some other regular task, such as writing to a file or sending a TCP/UDP packet somewhere.
Aside from the obvious concerns with the design of this application: given your current design, you could write a Windows service to monitor the log table (or file). If the newest entry is older than N minutes, fire off a System.Net.Mail.MailMessage.
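A sketch of the check such a service could run from its timer callback; the table/column names, connection string and addresses are assumptions:

using System;
using System.Data.SqlClient;
using System.Net.Mail;

public static class HeartbeatMonitor
{
    public static void CheckAndAlert()
    {
        object last;
        using (var conn = new SqlConnection("..."))                               // assumed connection string
        using (var cmd = new SqlCommand("SELECT MAX(LoggedAt) FROM AppLog", conn))
        {
            conn.Open();
            last = cmd.ExecuteScalar();
        }

        DateTime lastEntry = (last == DBNull.Value) ? DateTime.MinValue : (DateTime)last;
        if (DateTime.Now - lastEntry > TimeSpan.FromMinutes(20))
        {
            var msg = new MailMessage("monitor@example.com", "admin@example.com",
                "Application has gone quiet",
                "No log entry since " + lastEntry + " - it has either finished or needs help.");
            new SmtpClient("smtp.example.com").Send(msg);
        }
    }
}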
If you're able to make the application do something like sending packets, is there any reason you couldn't make the application send its own alerts? The alert needs to be sent when the application has finished, or when it needs "help" (I'm assuming some sort of manual input or confirmation). It's reasonable to assume that the application can tell when either of these things has happened, so... why can't the application send an alert? ASP.NET apps can access SMTP and send emails really easily using System.Net.Mail.SmtpClient. It seems really odd to me to have a separate program sending emails on behalf of a program that has access to a perfectly good email client class.
You can send emails from the global application error handler (Application_Error in Global.asax). A Windows service monitoring the database is another solution, but it may be better to write a short script or command-line application and run it with the Windows scheduler. It has a lower footprint and is easier to write and install.
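For the Global.asax route, a minimal sketch (the addresses and SMTP host are placeholders); note that Application_Error only fires on unhandled exceptions, not when the application simply goes quiet:

void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    var msg = new System.Net.Mail.MailMessage(
        "app@example.com", "admin@example.com",
        "Unhandled error in the application",
        ex == null ? "(no exception details available)" : ex.ToString());
    new System.Net.Mail.SmtpClient("smtp.example.com").Send(msg);
}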
I am currently working on a project where I think using SOAP would be a good idea, but I can't work out how to make it fit the way I need it to.
I have a C# console application called ConsoleApp, and ConsoleApp will also have a PHP web interface. What I'm thinking of doing is having the PHP web interface control the ConsoleApp in some way: I click a button on the web interface, this sends a SOAP request to a SOAP service, the SOAP service passes the information on to ConsoleApp, and the result is returned back to the SOAP service and then back to PHP.
This seems like it would need two separate SOAP services, one for PHP to interface with and one within ConsoleApp, but that doesn't sound right - I think I might be misunderstanding the purpose of SOAP.
How can this be achieved? Thanks for any help you can provide.
UPDATE
As requested I thought I'd add a bit more information on what I am trying to achieve.
The console app acts as an email server: emails are handed to the program and sent on, and if it can't send one, it retries a couple of times until the email goes into a failed state.
The web interface will provide a status of what the email server is doing, i.e. how many emails are incoming, how many are yet to be processed, how many have been sent and how many have failed.
From the web page you will be able to shut down or restart the email server, or put one of the failed emails back into the queue to be processed.
The idea is that when the user adds a failed email back into the queue, the page sends a SOAP message which the console app receives; the app adds the information back into the queue, logs the event in its log file, and increments the counter it uses to keep track of emails that need to be processed. Once this has been done it should send a response back to the web interface to say whether or not the email was successfully added back into the queue, or whether it failed for some reason.
I don't really want to keep polling the database every few seconds, as there could potentially be a large number of emails being processed, and polling would put a large load on the MySQL server, which I don't want. That's why I thought of SOAP: the email server would only need to do something when it receives a SOAP request.
Thanks for any help.
Every web service is going to need a client (in your case PHP) and a server (ConsoleApp). Even though there are two endpoints, it is still one web service. Your PHP will send a SOAP request which ConsoleApp will receive, process and respond to with a SOAP response.
So when someone clicks the button on the web page, you can use JavaScript to build and send the SOAP envelope in the browser. The alternative is to POST the values to a PHP page that will build and send the SOAP.
I have to admit, though, your scenario sounds a bit unusual. I personally haven't heard of web pages talking directly with console apps. Web pages usually talk to web servers, and the servers are usually the ones issuing atypical requests, like your request to ConsoleApp. It is technically possible, but I think it is going to be harder than you are expecting.
Personally, I would ditch SOAP in favor of a much more simple and scalable solution. Assuming you have access to a database, I would have the PHP create a record in the database when the user clicks the button. ConsoleApp would then poll the database every X seconds to look for new records. When it finds a new record, it processes it.
This has the benefit of being simple (database access is almost always easier than SOAP) and scalable (you could easily run an arbitrary number of ConsoleApps to process all of the incoming requests if you are expecting heavy loads). Also, neither the PHP page nor the ConsoleApp have a direct dependency on the other so each individual component is less likely to cause a failure in the whole system.
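A rough sketch of what ConsoleApp's polling loop could look like. It is shown with the SqlClient types for brevity - the question mentions MySQL, whose Connector/NET classes follow the same ADO.NET pattern - and the table and column names are assumptions:

using System;
using System.Data.SqlClient;
using System.Threading;

class Program
{
    static void Main()
    {
        while (true)
        {
            using (var conn = new SqlConnection("..."))        // assumed connection string
            {
                conn.Open();
                var cmd = new SqlCommand(
                    "SELECT Id, EmailId FROM PendingCommands WHERE Processed = 0", conn);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        int commandId = reader.GetInt32(0);
                        int emailId = reader.GetInt32(1);
                        Console.WriteLine("Re-queue email {0} (command {1})", emailId, commandId);
                        // put the failed email back in the queue, log it, bump the counter,
                        // then flag this command row as processed (a separate UPDATE).
                    }
                }
            }
            Thread.Sleep(TimeSpan.FromSeconds(10));            // the "every X seconds" poll
        }
    }
}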
In your opinion (hopefully one that is formed based on fact, as opposed to emotion) what is the better way to send out email notifications from a website?
For example, say User A on your site requests a friendship with User B, at which point you would generate an email to send to User B.
The question is - when is the best time to send the email? Immediately, as part of the same execution path, or scheduling the email as part of a batch?
Like I said, my question is rather generalized, so you can assume different architectures - one server dedicated to hosting, another dedicated to emailing, a single server, cloud hosting, etc... I'm curious about all answers, really.
As I see it:
With immediate emails, you get timely emails, but you can potentially bog down your server by sending too many emails should your website receive a lot of traffic. That being said, because you're not sending a batch of emails, they are all one-offs.
If you batch your emails and have a scheduled task or cron job pick them up and send them, your emails are not as immediate - so assume you decrease the interval so that batches are sent every 1 minute. The issue, as I see it is concurrency - if another batch kicks off before the first one completes, you could risk sending double emails if you don't appropriately flag or lock what you're sending.
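One common way around that, assuming a SQL Server queue table with made-up names, is to have each batch atomically claim its rows before sending, so an overlapping batch can't pick them up again:

// requires using System.Data.SqlClient;
using (var conn = new SqlConnection("..."))
using (var cmd = new SqlCommand(@"
    UPDATE TOP (100) EmailQueue
    SET    Status = 'Sending', ClaimedAt = GETDATE()
    OUTPUT inserted.Id, inserted.Recipient, inserted.Subject, inserted.Body
    WHERE  Status = 'Queued'", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // send this row; a batch that starts a moment later can't claim it again,
            // because its Status is no longer 'Queued'.
        }
    }
}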
In my personal experience, when I've had emails sent off immediately on a high traffic site, performance wasn't impacted too much, though a number of emails failed to send out.
Thoughts?
I would say definitely schedule them. There has to be some tolerance between the user's request and the server acting on it: if someone is able to request a friendship with another person, they are also (I hope) able to refuse a friendship with the same person. If so, what happens if I make rapid accept and refuse clicks on your website?
You have 2 options in this case, imo:
like SO does, add some throttling to user clicks (you cannot accept and then refuse within 2 seconds)
or you can allow it, but in that case the final message to the person whose friendship was accepted/requested is scheduled on the server and will be sent, say, after 30 minutes (or less - a matter of architectural choice)
Hope this helps.
I have many applications across an enterprise environment and they all use different methods of sending emails. Some send directly through an Exchange server, some queue up locally in an SMTP queue and others call a web service that then sends the email.
I'm trying to decide on the best way to get guaranteed delivery of emails. If our Exchange server goes down, then the applications that send to it directly can no longer send emails, also any emails sent during the down time never get anywhere. I would also like to implement a universal templating solution that all applications can share.
Are there any pre-built solutions to this problem, or do you have an insight on how to handle this issue?
We solved this by creating a web service that sends all our emails. This web service uses the
System.Net.Mail.SmtpDeliveryMethod.PickupDirectoryFromIis
setting, which essentially saves the files to a spot on the disk, tries to send them via the main SMTP server, and if the server is unavailable, they sit in the directory until it becomes available.
Guaranteed delivery, as long as the web service is up. Since we have redundancy checks in place, this is almost never an issue. If it is, we treat it as an error in code and handle it.
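For reference, this is roughly how that delivery method is set in code (it can equally be set in web.config under system.net/mailSettings); the addresses here are placeholders:

using System.Net.Mail;

var client = new SmtpClient();
// hand the message to the local IIS SMTP service's pickup directory; the IIS SMTP
// service then relays it to the main SMTP server and retries while that server is down
client.DeliveryMethod = SmtpDeliveryMethod.PickupDirectoryFromIis;
// or drop .eml files into a folder you control instead:
// client.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
// client.PickupDirectoryLocation = @"C:\MailDrop";
client.Send("noreply@example.com", "someone@example.com", "Subject", "Body");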
edit - added
I forgot to mention that XSS is a concern even in an email, so be sure to use something like the Microsoft.Security.AntiXss library, which contains functions like GetSafeHtmlFragment to strip out potentially dangerous scripts before outputting html to an email.
http://msdn.microsoft.com/en-us/security/aa973814.aspx
http://msdn.microsoft.com/en-us/library/aa973813.aspx
I have heard good feedback about Postmark. Maybe a service like that could be solution as it has several integration points.
http://postmarkapp.com
We have used hMailServer (on the Windows platform), which is freeware. We configured the max retries to a high number and used an external SMTP server to relay the emails. Our applications queue up emails on the hMailServer instance and that server relays them further. With the max retries set high, even if the main SMTP servers are down we can be fairly sure the emails are delivered - though there is no guarantee if there is a huge downtime on the main SMTP relay servers.
I do this by queuing email to a SQL Server database. Any ACID-compliant database will work, or you can use MongoDB with 'safe mode' inserts, but if you really need guaranteed delivery then use SQL Server or MySQL. This way, if your mail gets into the database it is 'guaranteed' not to be lost, and your app doesn't have to think about it. You could use a web service, or just make a shared assembly with a static public method in a class that drops your email in the db for you.
Include a column for status like 'new', 'delivered', 'recipient mailbox temporarily full' (which you can represent with numeric values), and keep a TimeToSend column, which starts out as the time when the email is queued in the database.
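A sketch of the shared "drop it in the database" helper described above; the table, columns and connection string are assumptions:

using System;
using System.Data.SqlClient;

public static class MailQueue
{
    private const int StatusNew = 0;   // e.g. 0 = new, 1 = delivered, 2 = mailbox temporarily full

    public static void Enqueue(string to, string subject, string body)
    {
        using (var conn = new SqlConnection("..."))
        using (var cmd = new SqlCommand(
            "INSERT INTO EmailQueue (Recipient, Subject, Body, Status, TimeToSend) " +
            "VALUES (@to, @subject, @body, @status, @timeToSend)", conn))
        {
            cmd.Parameters.AddWithValue("@to", to);
            cmd.Parameters.AddWithValue("@subject", subject);
            cmd.Parameters.AddWithValue("@body", body);
            cmd.Parameters.AddWithValue("@status", StatusNew);
            cmd.Parameters.AddWithValue("@timeToSend", DateTime.Now);   // eligible to send right away
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}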
Then you have a mail app that you can run once a minute as a Windows scheduled task. Build it as a console app. When it loads, it checks whether an instance of it is already running, and if there is one, it exits. When running, it does the following (a rough sketch follows the steps):
1. Query the database for all mail where the TimeToSend is older than now and attempt to deliver each one to the mail server.
2. If a mail is delivered to the mail server, mark it logically deleted.
3. If a mail can't be delivered, advance its TimeToSend column to 10 minutes from now.
4. Delete the logically deleted records from the table. You can do this in the app, or have a SQL job do it.
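And the rough sketch of the console sender itself, following those four steps plus the single-instance check; every name here (table, columns, host, addresses) is an assumption:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Net.Mail;
using System.Threading;

class MailSender
{
    static void Main()
    {
        bool createdNew;
        using (new Mutex(true, "Global\\MailSenderSingleInstance", out createdNew))
        {
            if (!createdNew) return;                       // another instance is running - exit

            var smtp = new SmtpClient("smtp.example.com");
            using (var conn = new SqlConnection("..."))
            {
                conn.Open();

                // 1. everything whose TimeToSend has passed and that isn't logically deleted
                var batch = new List<object[]>();
                using (var reader = new SqlCommand(
                    "SELECT Id, Recipient, Subject, Body FROM EmailQueue " +
                    "WHERE TimeToSend <= GETDATE() AND IsDeleted = 0", conn).ExecuteReader())
                    while (reader.Read())
                        batch.Add(new object[] { reader.GetInt32(0), reader.GetString(1),
                                                 reader.GetString(2), reader.GetString(3) });

                foreach (var row in batch)
                {
                    int id = (int)row[0];
                    try
                    {
                        smtp.Send("noreply@example.com", (string)row[1], (string)row[2], (string)row[3]);
                        // 2. handed to the mail server - mark it logically deleted
                        Exec(conn, "UPDATE EmailQueue SET IsDeleted = 1 WHERE Id = " + id);
                    }
                    catch (SmtpException)
                    {
                        // 3. couldn't deliver - try again in 10 minutes
                        Exec(conn, "UPDATE EmailQueue SET TimeToSend = DATEADD(minute, 10, GETDATE()) WHERE Id = " + id);
                    }
                }

                // 4. purge the logically deleted rows (or leave this to a SQL job)
                Exec(conn, "DELETE FROM EmailQueue WHERE IsDeleted = 1");
            }
        }
    }

    static void Exec(SqlConnection conn, string sql)
    {
        using (var cmd = new SqlCommand(sql, conn)) cmd.ExecuteNonQuery();
    }
}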
As mentioned earlier, you can utilize a web service that you can usually POST JSON to using an HTTP request. Here are a bunch of choices:
PostageApp (Ours!)
SendGrid
PostmarkApp
Amazon SES
They all have different feature sets and offerings, so definitely give them all a spin and figure out which you prefer.
(Full Disclosure: I am the Product Manager of PostageApp.)
I am building a web site, and the client wants a newsletter "system" on it.
How do I send these kinds of mass (>1000) emails?
I read somewhere that using the SendAsync method of the SmtpClient does the trick.
But it constantly gives me an "Email failure" exception, and I don't know what to do with that right there...
So, basically my question is, is it OK to send the emails using the Send method of the SmtpClient, but with each mail in its own thread?
For example:
NewsletterEmail newsletterEmail = new NewsletterEmail(emailAddress[i], mailSubject, mailBody);
// each email gets its own background thread
Thread t = new Thread(new ThreadStart(newsletterEmail.MakeAndSendEmail));
t.IsBackground = true;
t.Start();
I think you should rethink your strategy for sending bulk emails.
Creating >1000 threads is not a good idea; it may even crash your server, or make it respond very slowly.
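A hedged sketch of the simplest alternative - one background worker that sends the whole list, instead of a thread per message. NewsletterEmail, emailAddress, mailSubject and mailBody are the question's own names; everything else (including the assumption that MakeAndSendEmail throws on failure) is illustrative:

var worker = new Thread(() =>
{
    foreach (string address in emailAddress)
    {
        try
        {
            // reuse the question's own helper, one message at a time
            new NewsletterEmail(address, mailSubject, mailBody).MakeAndSendEmail();
        }
        catch (Exception)
        {
            // log and continue - one bad address shouldn't stop the newsletter
        }
    }
});
worker.IsBackground = true;
worker.Start();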
Tell your client about Constant Contact. They are going to handle this much better than you ever could. It's also cheaper than your time.
In the event that fails you have a couple options.
If they already have an email server, leverage that to do the email broadcasting. In other words, relay the mail through that server.
If you can't do that, go download a free email server. I've been using hMailserver. Set it up and relay through it.
If you can't do that, write your own SMTP processing engine. Don't attempt to send the emails directly from ASP.Net. Queue them up in a database and write a windows service to handle the mail broadcasting.
Sending emails can sometimes take several seconds per email. This could completely hose your website while it's trying to handle sending 1000 emails.
A number of mail servers are configured with grey listing, meaning that they require you to send the same email twice in order to prove you aren't a spammer.
Next, getting the DNS appropriately set up can be a PITA, which is why I suggest Constant Contact. I have one client that took nearly 5 years to finally get their DNS configured; and yes, I gave them explicit instructions once a year on what to do. Reverse DNS is critical.
Another thing is that some recipient servers have a limit on the number of simultaneous connections they will accept from you; most mail servers are built to take this into consideration. If you cross that boundary, the recipient servers will consider you a spammer and take appropriate action.
Another problem area is in sending to a bad address, over and over again. AOL and others will consider you a spammer just for this one thing.
Point is, you really don't want to write this yourself.
Your best bet would be to have a separate process send the emails. Either have it run on a schedule to check for emails that need to be sent (maybe store the emails in a table?), or, if you don't like the scheduled process idea, then you can have a console application that is started by your website.
Something else to keep in mind is that if you are sending too many emails in a short period of time, it becomes very easy to get black-listed and then none of the emails from your domain will make it to any servers that have you black-listed.
I've taken two approaches:
1) Lazy, sloppy approach of configuring the asp.net process timeout long enough to complete via Send
2) Create a console app that is spawned by the web app.
I built a similar system a year or two ago. There are so many things that can go wrong when you send an email programmatically. For this reason, do yourself a favor and separate the messages from the process of sending them. This way, you can "teach" your system that "badEmail@goodDomain.com" should be ignored, or handle other similar situations.
You can store the message, subject, or whatever level of separation of data you desire in the database, along with flags for meta-data like "SentOn", "FailedOn", etc. I sent my messages one at a time to allow for individual errors to be stored and/or handled. I used SmtpMail.Send(), but whatever method you choose should work as long as you build something smart and recoverable.
Have a look at the MailChimp API: http://www.mailchimp.com/api/gettingstarted/