Data persistence for sending customer emails - C#

I'm developing a system to handle sending transactional emails to our customers.
This is how it works:
1. An event occurs during the order's life cycle, for example 'shipped'
2. This event will trigger the creation of an email in the database (email queue)
3. A separate Windows service polls the db table for new emails to send. When it finds one, it calls a web service with all the required data. It's the web service's responsibility to handle the actual sending of the email.
My question relates to step 2.
When an email-triggering event occurs, should I take a snapshot of all the data required by the service (thereby duplicating data and introducing new tables), or should I read the required data from the transactional db tables only at the point where I'm ready to call the web service?

It depends largely on your data volume. If you have a large amount of data, go with the first solution: denormalize the required data into a separate table (accepting some duplication) and then send the email. A snapshot also means the email reflects the order as it was when the event fired, even if the transactional rows change before the send actually happens.
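As a hedged sketch of that snapshot approach, the event handler could copy just the fields the email needs into the queue row at the moment the event occurs. The EmailQueue, Orders, and Customers tables and their columns below are illustrative, not from the original post:

```csharp
using System;
using System.Data.SqlClient;

public class EmailQueueWriter
{
    private readonly string _connectionString;

    public EmailQueueWriter(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Step 2 of the pipeline: snapshot the data the email needs so the
    // sending service never has to re-read (possibly changed) order rows.
    public void QueueShippedEmail(int orderId)
    {
        const string sql = @"
            INSERT INTO EmailQueue (OrderId, RecipientEmail, RecipientName,
                                    TrackingNumber, CreatedUtc, Sent)
            SELECT o.OrderId, c.Email, c.Name, o.TrackingNumber, @now, 0
            FROM Orders o
            JOIN Customers c ON c.CustomerId = o.CustomerId
            WHERE o.OrderId = @orderId;";

        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@orderId", orderId);
            cmd.Parameters.AddWithValue("@now", DateTime.UtcNow);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```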


Concurrent Instances - Data / Design problem

I'm trying to come up with a design for an application (C#/Avalonia) which will allow creating views for data coming from multiple sources. The overall idea is to link the sources and present outcome using various visualization components.
There are multiple sources of data:
Database 1
Database 2/3/4
SOAP
Database 1 is going to be used to store everything related to the application itself (users, permissions and so on).
Databases 2-4+ are only data feeds.
SOAP - this is where I struggle; I'm not quite sure how to handle it. There could be 10-50 concurrent instances of the application running, and each of them could request the same data update from SOAP (which the provider's restrictions make impossible).
What I was thinking was to take the following approach:
Request initial data from SOAP
Cache the data in database 1 with a timestamp
Define a delay between the requests
Once a user requests a data update from SOAP, check whether to return cached or fresh data based on the timestamp and the delay value (see the sketch after this list).
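A minimal sketch of that freshness check, assuming the cached payload and a FetchedUtc timestamp live in a table in Database 1; every name here is illustrative:

```csharp
using System;

public class SoapCache
{
    // The delay between real SOAP requests; 5 minutes is an assumed value.
    private readonly TimeSpan _minDelay = TimeSpan.FromMinutes(5);

    public string GetData(string key)
    {
        CacheEntry cached = LoadFromDatabase1(key);

        // Fresh enough: every concurrent instance gets the cached copy.
        if (cached != null && DateTime.UtcNow - cached.FetchedUtc < _minDelay)
            return cached.Payload;

        // Stale or missing: make one real SOAP request and re-cache it.
        string fresh = CallSoapProvider(key);
        SaveToDatabase1(key, fresh, DateTime.UtcNow);
        return fresh;
    }

    private class CacheEntry
    {
        public string Payload;
        public DateTime FetchedUtc;
    }

    // Persistence and the SOAP call are elided; these are placeholders.
    private CacheEntry LoadFromDatabase1(string key) { return null; }
    private string CallSoapProvider(string key) { return string.Empty; }
    private void SaveToDatabase1(string key, string payload, DateTime fetchedUtc) { }
}
```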
This approach leads to an issue where the user terminates the application in the middle of requesting new data.
User 1 requests new data and marks the database so that no concurrent requests are processed
User 2 requests new data - nothing happens at this stage; it waits and queries again
User 1 terminates - the mark is never cleared, so no user ever gets new data
Is the approach completely wrong, and would trying to handle this on the client side alone be a dead end?

ASP.NET + Entity Framework - handling intermittent traffic spikes

I have an MVC and WebAPI application that needs to log activities performed by the users back to my database. This is almost always a single insert into a table that has fewer than 5 columns (i.e. very little data is crossing the wire). The data interface I am currently using is Entity Framework 6.
Every once in a while, I'll get a large number of users needing to log that they performed a single activity. In this case, "large number" could be a couple hundred requests every second. This typically lasts only a few minutes at most. The rest of the time, I see very manageable traffic to the site.
When the traffic spikes, some of my clients get timeout errors because the page doesn't finish loading until the server has inserted the data into the database. The actual insert isn't necessary for the user to continue using the application, so I could cache these requests somewhere locally and batch-insert them later.
Are there any good solutions for ASP.NET MVC to buffer incoming request data and then batch-insert it into the database every few seconds?
As for my environment: I have several servers running Server 2012 R2 in a load-balanced web farm. I would prefer to stay stateless if at all possible, because users might hit a different server on each request.
I would suggest using a message queue. Have the website rendering code simply post an object to the queue representing the action, and have a separate process (e.g. Windows Service) read off the queue and write to the database using Entity Framework.
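A minimal sketch of that split, assuming MSMQ via System.Messaging; the queue path, the ActivityLogEntry type, and the ActivityContext DbContext are illustrative, not part of the original question:

```csharp
using System;
using System.Messaging;

public class ActivityLogEntry
{
    public int UserId { get; set; }
    public string Action { get; set; }
    public DateTime OccurredUtc { get; set; }
}

public static class ActivityLogging
{
    private const string QueuePath = @".\private$\activityLog";

    // Web side: enqueue and return immediately, so the page never waits
    // on the database.
    public static void Enqueue(ActivityLogEntry entry)
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Send(entry); // serialized by the default XmlMessageFormatter
        }
    }

    // Windows service side: drain whatever is queued, then batch-insert
    // with a single SaveChanges call.
    public static void Drain()
    {
        using (var queue = new MessageQueue(QueuePath))
        using (var db = new ActivityContext()) // hypothetical EF DbContext
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(ActivityLogEntry) });

            Message message;
            while (TryReceive(queue, out message))
            {
                db.ActivityLog.Add((ActivityLogEntry)message.Body);
            }
            db.SaveChanges(); // one round trip for the whole batch
        }
    }

    private static bool TryReceive(MessageQueue queue, out Message message)
    {
        try
        {
            message = queue.Receive(TimeSpan.FromSeconds(1));
            return true;
        }
        catch (MessageQueueException) // treated as "queue empty" in this sketch
        {
            message = null;
            return false;
        }
    }
}
```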
UPDATE
Alternatively you could log access to a file (fast), and have a separate process read the file and write the information into your database.
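A hedged sketch of that file alternative, on the web side: each request appends one tab-separated line, and the separate process parses and inserts on its own schedule. The path and format are illustrative:

```csharp
using System;
using System.IO;

public static class ActivityFileLog
{
    private static readonly object Gate = new object();
    private const string LogPath = @"C:\Logs\activity.log"; // one file per server

    // Appending a line is fast and keeps no state in the web process.
    public static void Append(int userId, string action)
    {
        string line = string.Format("{0:o}\t{1}\t{2}",
            DateTime.UtcNow, userId, action);

        lock (Gate) // serialize writers within this web process
        {
            File.AppendAllText(LogPath, line + Environment.NewLine);
        }
    }
}
```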
I prefer the message queue option, but it does add another piece of architecture.

What should I use to be sure that a block of code is executed together in C#?

I have a 3 layer web app in C#.
I have a simple method in the business layer that calls another one in the database layer to insert info into the database.
When control returns to the business layer, I check the result variable; if it's positive, it means the info was inserted in the database, and I then call another method to send an email.
I was wondering: what would happen if the server goes offline right in the middle of this? For example, just after the info was inserted but before the mail was dispatched.
How can I solve this situation and make this block of code run atomically? Using a transaction? (I'm not sure how to use one across different methods in different class libraries.)
Many thanks.
Separate these concerns. Have your business layer write the values with a new field, "MailSent" or similar, set to false. Have another service poll the table for unsent mails and work through those.
You can run all of your database operations within a transaction, but you truly can't ensure the mail is sent out in the middle of that transaction. Even though you can dispatch an email to an SMTP server for delivery, mail delivery is not guaranteed anyhow!
The mail server may be unable to connect outward toward the internet, or to wherever it has to relay mail.
It may be able to connect out, but weird stuff happens and the message may be delayed (as in the case where it connects right away, but the connection is dropped).
Don't drive yourself crazy. It's a short drive.
Sending an email via a relay sometimes takes time, so you do not want to block waiting to find out whether the send failed or succeeded.
That said, there is no right or wrong answer here. Here are my two cents.
As you said, the second block of code sends the email. On our sites, we use a separate process to send out emails. Here is how it works:
1. Use a transaction to write the information to the Info table and an EmailQueue table in the database
2. A background process picks up emails from EmailQueue (say, every 5 minutes) and sends them
3. If the send succeeds, mark the email as sent
4. If the send fails, increase an attempt counter until it reaches some limit (say, 3 attempts)
If the server goes offline as you described and comes back up again, the background process will pick up the emails from EmailQueue which haven't been sent and are still under the attempt limit.
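A minimal sketch of step 1, assuming SQL Server with illustrative Info and EmailQueue tables; the point is that both inserts commit or roll back together, so a crash can never leave one without the other:

```csharp
using System;
using System.Data.SqlClient;
using System.Transactions;

public static class EmailEnqueue
{
    public static void SaveInfoAndQueueEmail(string connectionString,
        string infoValue, string to, string subject, string body)
    {
        using (var scope = new TransactionScope())
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            using (var cmd = new SqlCommand(
                "INSERT INTO Info (Value) VALUES (@value)", conn))
            {
                cmd.Parameters.AddWithValue("@value", infoValue);
                cmd.ExecuteNonQuery();
            }

            using (var cmd = new SqlCommand(
                "INSERT INTO EmailQueue ([To], Subject, Body, Sent, Attempts) " +
                "VALUES (@to, @subject, @body, 0, 0)", conn))
            {
                cmd.Parameters.AddWithValue("@to", to);
                cmd.Parameters.AddWithValue("@subject", subject);
                cmd.Parameters.AddWithValue("@body", body);
                cmd.ExecuteNonQuery();
            }

            // If the server dies before this line, neither row exists and
            // nothing is left half-done.
            scope.Complete();
        }
    }
}
```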
Instead of returning a boolean result, maintain a table called Outbox and insert a row into it when the condition is satisfied.
Then, when control comes back to your business layer as mentioned in your question, process all the Outbox mails: send every mail present in Outbox, and either delete the entry or update a status column in the Outbox table that represents the email's state.

Email sending strategy with C#/.NET

I have a web application from which emails should be sent after specific actions. I have some alternatives for handling this, and I'm not sure which one is best.
The first is that when a user performs an action, the email is sent directly from the ASP.NET application. But I don't think this is a really reliable system, because if the SMTP server is down or something else happens, the user just gets feedback that his action cannot be completed.
As an alternative I was thinking about implementing a queuing system, for which I have some ideas:
Push emails to send into a database table, and have a service application periodically check for new messages and send them. On a successful send it marks the email task completed.
Use MSMQ for queuing. In this case the whole email could be passed as a message; the other way is to store the message with attachments in a db table, and pass only the data required to query the db table and send the message. That way I don't have to deal with MSMQ's size limits (because of attachments).
something else, like a local WCF service to notify the service
Which way do you think is best?
MSMQ is not a good solution, since it has a 4 MB limit on each message: http://blogs.msdn.com/b/johnbreakwell/archive/2007/08/22/why-is-there-a-4mb-limit-on-msmq-messages.aspx
Worst case, if MSMQ fails (its process throws an error or shuts down unexpectedly), it can lose many messages. In my experience this solution is only good when the hardware and software are installed in near-ideal conditions.
Using a database and a Windows service is better, since it is simple and doesn't need much effort.
I usually use a combination of database and files. The database contains a table for the header information and a flag recording what has happened to the message (success, error, or otherwise), and the files contain the message body (either HTML or plain text) and the attachments in their original format.
When the send process runs, it is quicker to assemble a message from files than to query blobs/clobs.
Since the messages live on the file system, you can easily add hardware (servers, components, and so on) to increase the availability of the system.
The database could hold all of it too, but that will cost you more in database software licenses.
I also send a test email after every x sends to make sure everything still works; this test email goes to my own (or a dummy) inbox, and an application checks that the test email received is the same one that was sent. If it matches, sending of the pending emails continues.
Another way, if you are using MS Exchange, is to use its message queue by utilizing its web service to queue sends. This is an easy way, but you need a license.
You can see in the MSDN library how to utilize the MS Exchange web service.
You can use an email server like hMailServer. In order to push emails into a queue, you can push them to the mail server. To do that, you can write a Windows Forms application with a timer that checks for every row with Status 0 (not sent) in the email table. When the thread sends a row to the mail server, it is marked as 1 (sent).
You can also classify your emails if you use the DB. Different actions can send different emails. You can store this info in the DB as well, so that your application thread will know which email template to send.
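A minimal sketch of that timer loop, assuming an EmailQueue table with Id, [To], Subject, Body, and Status columns and a local SMTP server such as hMailServer; all names are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Net.Mail;
using System.Timers;

public class EmailPoller
{
    private readonly Timer _timer = new Timer(30000); // poll every 30 seconds
    private readonly string _connectionString;

    public EmailPoller(string connectionString)
    {
        _connectionString = connectionString;
        _timer.Elapsed += (s, e) => SendPending();
    }

    public void Start() { _timer.Start(); }

    private void SendPending()
    {
        var pending = new List<PendingMail>();

        using (var conn = new SqlConnection(_connectionString))
        {
            conn.Open();

            // Load unsent rows first, so the reader is closed before updating.
            using (var cmd = new SqlCommand(
                "SELECT Id, [To], Subject, Body FROM EmailQueue WHERE Status = 0",
                conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    pending.Add(new PendingMail
                    {
                        Id = reader.GetInt32(0),
                        To = reader.GetString(1),
                        Subject = reader.GetString(2),
                        Body = reader.GetString(3)
                    });
                }
            }

            using (var smtp = new SmtpClient("localhost")) // e.g. local hMailServer
            {
                foreach (var mail in pending)
                {
                    smtp.Send("noreply@example.com", mail.To, mail.Subject, mail.Body);

                    using (var update = new SqlCommand(
                        "UPDATE EmailQueue SET Status = 1 WHERE Id = @id", conn))
                    {
                        update.Parameters.AddWithValue("@id", mail.Id);
                        update.ExecuteNonQuery(); // mark as sent only after the send
                    }
                }
            }
        }
    }

    private class PendingMail
    {
        public int Id;
        public string To;
        public string Subject;
        public string Body;
    }
}
```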

Building an Email Sender Service

I have a couple of web applications which all send emails, whether from a contact form, some kind of notification update, etc.
The problem I have found is that there isn't really any way to track the emails which are being sent from the web applications, so I've come up with a possible solution:
It's pretty straight forward really - instead of having each web application sending the emails themselves I would like to unify the process by creating a central Email Sender Service.
In basic terms, each application would just create a row in an 'Outbound Emails' table in the database with To, From, Subject, and Content data.
The Email Sender Service (a Windows service) would then pick up the emails from the outbox, send them, and mark them as sent.
Even though I would store the 'basic email' information (to, from, subject, content) in the database, what I would really like to do is also store the 'MailMessage' object itself, so that the Email Sender Service could de-serialize the original MailMessage; this would allow any application to fully customize the email.
Are there any problems with using the MailMessage object in this way?
Update: Another objective is to store a log of emails that have been sent - hence the reason for using a database.
A far better architecture is to have the applications call some sort of public interface on the email sending service. The service itself can then be responsible for recording sends in a database.
This architecture means that the database becomes internal to the service and so reduces the coupling between your applications (each application knows about a relatively small public contract rather than a database schema). It also means that if you do find some problem with storing MailMessage objects in the database then you can change the storage method without updating all of your clients.
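As a hedged sketch of that public contract, a small WCF service interface might look like the following; the interface and type names are illustrative:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IEmailSender
{
    // One-way: callers fire and forget; the service records and sends.
    [OperationContract(IsOneWay = true)]
    void Send(EmailRequest request);
}

[DataContract]
public class EmailRequest
{
    [DataMember] public string To { get; set; }
    [DataMember] public string From { get; set; }
    [DataMember] public string Subject { get; set; }
    [DataMember] public string Body { get; set; }
}
```

Clients then know only this contract, not the database schema behind it.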
Why use the database? Simply have the applications call your email service directly, providing all information.
If you'd like to queue up the sends, then you can use a net.msmq binding with WCF, which will store the requests in a reliable queue that the service would read from. All of this would be done for you.
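A minimal sketch of hosting that same contract over MSMQ with WCF; the EmailSenderService implementation and the queue address are illustrative, and the transactional private queue must already exist:

```csharp
using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        // EmailSenderService is your (hypothetical) IEmailSender implementation.
        var host = new ServiceHost(typeof(EmailSenderService));
        host.AddServiceEndpoint(typeof(IEmailSender),
            new NetMsmqBinding(NetMsmqSecurityMode.None),
            "net.msmq://localhost/private/emailQueue");
        host.Open();

        Console.WriteLine("Email sender service listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```

Messages sent while the service is offline simply wait in the queue, which is what makes this binding attractive for email.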
