I am creating a complaint registration form for users in ASP.NET MVC. The complaint is sent to the user's email address and saved into a SQL Server 2017 database. I need a suggestion.
My question is: how do I handle the email and the database together?
Condition 1 - what if the database fails to save the data but the email is sent?
Condition 2 - what if the database saves the data to the table but the email fails to send?
Here is how I did it:
public void MyFunction()
{
    try
    {
        using (var db = new ComplaintDbContext()) // your EF DbContext class
        {
            // some code building the complaint entity
            // SaveChanges returns the number of state entries written to the database
            int entities = db.SaveChanges();
            if (entities > 0)
            {
                SendEmail(); // what if this fails to send after the complaint is created?
            }
        }
    }
    catch (Exception ex)
    {
        // exception is handled here
    }
}
When db.SaveChanges() completes, it returns the number of state entries (rows) written to the database. From my homework, the second condition is the most important.
The result I want is for both the creation of the complaint and the sending of the email to be handled smoothly.
Am I on the right path or not?
Any suggestion is appreciated.
What I would do in this case is save the emails to a table in the database, and implement a service that reads from that table and sends any unsent emails, with a retry count for each entry.
That way you can retry sending the email at any time if it fails.
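A rough sketch of that idea, assuming a hypothetical EmailOutbox entity and EF context (all names here are illustrative, not from the question):

// Hypothetical outbox entity: one row per email to be sent.
public class EmailOutbox
{
    public int Id { get; set; }
    public string To { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
    public bool Sent { get; set; }
    public int RetryCount { get; set; }
}

// Background sender: run this from a timer or a Windows service.
public void SendPendingEmails(ComplaintDbContext db, int maxRetries = 5)
{
    var pending = db.EmailOutbox
        .Where(e => !e.Sent && e.RetryCount < maxRetries)
        .ToList();

    foreach (var email in pending)
    {
        try
        {
            using (var client = new System.Net.Mail.SmtpClient()) // reads SMTP settings from config
            using (var msg = new System.Net.Mail.MailMessage(
                "noreply@example.com", email.To, email.Subject, email.Body))
            {
                client.Send(msg);
            }
            email.Sent = true; // flag as done
        }
        catch (System.Net.Mail.SmtpException)
        {
            email.RetryCount++; // leave unsent; picked up again on the next pass
        }
        db.SaveChanges();
    }
}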
In real life you would hand the email process off to another service (like SendGrid): your application stores a request to send an email after the save is done, and that request is handled by the service whenever it can. That way you also don't need to wait for the email to be sent in order to finish the user's request.
Since you're doing homework you can:
1- [Recommended] Store the email in a table and have a small service reading from it; once it finds a record, it sends the email and flags it as done.
2- [Not fancy] Do it exactly as you are doing it: save the record and, if it succeeds, send the email.
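The key point of option 1 is that the complaint and the email request are written in the same SaveChanges call, so they succeed or fail together. A sketch of the save side, reusing the hypothetical EmailOutbox entity from the previous answer (Complaint and the context are also illustrative names):

public void RegisterComplaint(Complaint complaint)
{
    using (var db = new ComplaintDbContext())
    {
        db.Complaints.Add(complaint);

        // Queue the notification in the same unit of work as the complaint:
        // the single SaveChanges below commits both rows or neither.
        db.EmailOutbox.Add(new EmailOutbox
        {
            To = complaint.UserEmail,
            Subject = "Complaint received",
            Body = "We have registered your complaint."
        });

        db.SaveChanges();
    }
}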
This is an interesting problem because it asks you to consider trade-offs, probabilities, and perfection vs. what's good enough.
Here's what's good enough: Update the database. If it succeeds, put a message in a queue (which could be another database table) that will result in an email being sent.
You could just send the email directly via SMTP right when the database gets updated, but there's a small chance that due to some odd transient condition the email wouldn't get sent. (That happens - maybe SMTP permissions get messed up and the email gets rejected.) Putting it in a queue with a separate process is a reasonable approach to make sure there's some resilience for your emails.
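For contrast, this is roughly what the direct send looks like; a transient failure here loses the email unless you log it and retry somehow, which is exactly the gap the queue closes (a sketch, with a hypothetical logger):

// Direct send: simple, but if the SMTP call throws after the complaint
// was saved, the notification is lost unless the failure is logged and retried.
try
{
    using (var client = new System.Net.Mail.SmtpClient("smtp.example.com"))
    {
        client.Send("noreply@example.com", userEmail, "Complaint received", body);
    }
}
catch (System.Net.Mail.SmtpException ex)
{
    logger.Error(ex, "Email send failed; complaint was already saved"); // hypothetical logger
}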
Why is that only "good enough?" Because nothing is bulletproof. It's still possible that your email might not get sent. After all, we're concerned that the database update might fail. But if we're sending the email by inserting a record into another database that some other process monitors, doesn't that mean that could fail too? What will we do if the first database update succeeds but the one to send the email fails?
That's where we start making trade-offs. It's possible that we might insert a complaint into the database but not send an email. How likely is that, and if it happens, how bad is it? And whatever we might consider doing in response to that unlikely scenario, couldn't that fail too?
That's a rabbit hole. We can endlessly make our code more and more complex trying to account for less and less likely scenarios, but it's not worth it. Eventually bugs in our overly complex code will become the reasons why something doesn't work.
Instead of chasing after that, it makes more sense to realize that sometimes our code will fail because of things we can't control, and to know what that failure will look like. Again, this is a question of what's good enough.
The initial database update fails. The user sees a message saying that their complaint was not saved. Perhaps we can provide them another way to contact us. The exception gets logged so that we can figure out what happened.
The initial database update succeeds, then we send the email message to a queue. The email process fails. That should also get logged. That might be more urgent, because if one email fails, perhaps lots of them are. Hopefully that process provides some sort of alert if it's down so someone can fix the problem.
The initial database update succeeds, but sending the email message to the queue fails. We log that. Again, hopefully there's something to let us know when stuff is failing, even if it's an email digest that someone gets.
Even that can fail. Our logging and our alerts can fail. We can replace them with something more resilient, like a better message queue. But we could drive ourselves insane trying to account for everything when it's impossible. All we can do is try to make our applications reliable and resilient.
To give this question context, we have an ASP.NET MVC project which requires you to be authenticated to use the system (typical SaaS product). The project includes an inactivity timer which will log the user out if they leave the screen alone for too long. The project is an SPA-type project and Web API is used to get/post relevant data.
I am currently having to develop a routine that archives a potentially huge amount of data, and the process itself is fine. What I'm not sure of is: once the process is started and a POST is sent to Web API, does the server-side code continue to run if the inactivity timeout occurs or the user logs out manually for some reason?
I assume it would, but I don't like to rely on assumptions.
EDIT: For example
For the comments/answers below: the screen has a list of tickboxes for the data the users wish to archive, so there is no set list of data, and this project does need to process the task.
The following code runs on the client side (checks etc. omitted; the data variable contains all the true/false values for the ticks):
self.Running = true;
self.showProgress();

http.ajaxRequest("post", "/api/archive/runarchive", data)
    .done(function () {
        self.Running = false;
    })
    .fail(function () {
        self.Running = false;
        app.showMessage("You do not have permission to perform this action!");
    });
For reference, the showProgress function below is used to pick up progress and display it on screen. It is also run when accessing the screen, so that if an archive process is still running it can be displayed:
self.showProgress = function () {
    http.ajaxRequest("get", "/api/archive/getarchiveprocess")
        .done(function (result) {
            if (result.ID == -1) {
                $("#progressBar").hide();
                $("#btnArchive").show();
                if (self.Running) setTimeout(self.showProgress, 2000);
                else app.showMessage("The Archive Process has finished.");
            }
            else {
                $("#progressBar").show();
                $("#btnArchive").hide();
                $("#progressBarInner").width(result.Progress + '%');
                $("#progressBarInner").attr("data-original-title", result.Progress + '%');
                setTimeout(self.showProgress, 2000);
            }
        });
};
Server Side:
[HttpPost]
public void RunArchive(dynamic data)
{
    // Add a table row entry for the archive process, for reference and progress.
    // Check each tick and update tables/fields etc.
    // Code omitted as it is very long and not needed for the example;
    // the reference table row is updated during the checks for the showProgress function.
}
So basically I'm asking whether the RunArchive() function on the controller will keep running until it's finished, despite the user logging off and becoming unauthenticated in some way. I'm aware that any IIS or app pool refresh etc. would stop it.
It sounds like Web API is the one doing the heavy work, and once that starts it will continue to run regardless of what happens on the UI side of things.
That being said, there is a timeout for Web API requests that you can control in web.config.
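If I remember correctly, the relevant knob is the executionTimeout attribute on httpRuntime, measured in seconds; a sketch of what that looks like in web.config:

<configuration>
  <system.web>
    <!-- executionTimeout is in seconds and is ignored when compilation debug="true" -->
    <httpRuntime executionTimeout="3600" />
  </system.web>
</configuration>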
You might want to consider another alternative: whenever you're talking about heavy processing tasks, you're better off handing those to another service.
Your API is supposed to be responsive and accessible to your users, and it needs to respond fast to allow for a better experience. If you get 100 users doing heavy work, your API will basically crumble.
The API could simply send commands to a queue of stuff that needs to be run, and another service can pick them up and execute them. This keeps your API lightweight while the work is still being done.
You're talking about archiving, which probably involves a database, and there is no reason why you can't have something else do that job.
You could keep track of jobs in the database: build a table which holds statuses, and once a job is done, the external service changes the status in the database and your UI can then show the result.
So the API could work like this (sketched in code below):
1. Add a message to the queue.
2. Add the job details to the db with a status of "new", for example, and a unique id which allows the queue item to be linked to this record.
3. Service B picks up the job from the queue and updates the status in the db to "running".
4. The job finishes and Service B updates the status to "complete".
5. The UI reflects these statuses so the users know what's going on.
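A rough sketch of the API side of that flow, with hypothetical names for the job entity, the EF context and the queue client:

[HttpPost]
public IHttpActionResult RunArchive(ArchiveRequest request) // ArchiveRequest is hypothetical
{
    using (var db = new ArchiveDbContext()) // hypothetical EF context
    {
        // Steps 1-2: record the job so the UI can poll its status.
        var job = new ArchiveJob
        {
            Id = Guid.NewGuid(),
            Status = "new",
            Options = JsonConvert.SerializeObject(request) // the tickbox selections
        };
        db.ArchiveJobs.Add(job);
        db.SaveChanges();

        // Hand the work to Service B; the queue could be MSMQ, a service bus,
        // or even another database table that Service B polls.
        queueClient.Send(job.Id); // hypothetical queue client

        return Ok(job.Id); // respond immediately; the UI polls for status
    }
}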
Something like this would make for a better user experience, I would think.
Feel free to change whatever doesn't make sense; it's a bit hard to give suggestions when you don't know the details of what needs to be done.
This Service B could be a Windows service, for example, or whatever else you want that can do the job. User permissions come into play at the beginning only: a work item is added to the queue only if the user has the permission to initiate it. This gives you the certainty that only authorized jobs are added.
After that, Service B won't care about user permissions and will do the job to the end, irrespective of whether the user is logged in or not.
This is largely guesswork at this point, but you should be able to get an idea of how to do this.
If you have more specific requirements, you should add those to the initial question.
Even if the process isn't killed by the user logging out, you also need to consider that IIS can recycle app pools; by default it is set to do so once a day, as well as on memory contention, either of which will kill your long-running process.
I would highly recommend you check out Hangfire.io, which is designed to help with long-running processes in ASP.NET sites.
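A minimal sketch of what that looks like with Hangfire; ArchiveService and jobId are hypothetical, but the Hangfire calls are its standard API:

// OWIN startup: point Hangfire at SQL Server so queued jobs survive
// app pool recycles, then start the background job server.
GlobalConfiguration.Configuration.UseSqlServerStorage("ArchiveDbConnection");
app.UseHangfireServer();

// In the controller: enqueue the long-running work and return immediately.
// Hangfire persists the job and retries it if the process dies mid-way.
BackgroundJob.Enqueue<ArchiveService>(svc => svc.RunArchive(jobId));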
I am trying to write C# code for opening a Sage 300 connection. I am using the Accpac.Advantage DLL.
Here is my code:
try
{
    sage300Session.Init(sessionHandle, appID, programName, appVersion);
    sage300Session.Open(_user, _ppswd, _companyID, DateTime.Today, 0);

    // Open a database link.
    sage300DbLink = sage300Session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadWrite);
}
catch (Exception ex)
{
    // handle/log the failure to open the session or database link
}
The issue I am having is that no matter what I put in the password, the call to .Open seems to succeed. If I put in an invalid user or company ID, I get errors as expected (the connection status seems to say open either way).
My questions are: 1) what is happening with the password, such that it doesn't seem to be used, and 2) when I am through with what I am doing, is there a way to correctly close the connection?
The Accpac.Advantage DLL is v2.0.50727 and I am connecting to a Sage 300 2014 environment.
As it turned out, the security setting requiring passwords to log in was not enabled in the system database. Enabling it "resolved" the issue and made the password be used. I never did find a way to disconnect from the session, so I let it disconnect when I am done with the processing by having the connection go out of scope.
Actually, both Session and DBLink implement IDisposable, and calling .Dispose() (or wrapping them in a using block) is enough to end the session. (I would have added this as a comment, but couldn't.)
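A sketch of that, assuming the Session and DBLink types from the Accpac.Advantage assembly work as described in this answer:

// Dispose is called automatically at the end of each using block,
// ending the session cleanly even if an exception is thrown.
using (var session = new Session())
{
    session.Init(sessionHandle, appID, programName, appVersion);
    session.Open(_user, _ppswd, _companyID, DateTime.Today, 0);

    using (var dbLink = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadWrite))
    {
        // ... work with the database link ...
    }
}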
I'm using the latest stable build of TweetSharp from CodePlex in a VS2008 C# project. I'm writing the project in terms of TwitterService, not FluentTwitter.
I have an application that authenticates and then acts as a listener: it sits around and polls Twitter at a regular interval looking for direct messages. After I have fetched the latest direct messages (which works fine), I process them and do stuff, then I want to remove them from my inbox so I never reprocess them.
The first place I looked was TwitterService.DeleteDirectMessage(int msgId); however, since I didn't author the DMs, I clearly can't delete them that way. I know there is a way to do this, because if you log in to the Twitter web page you can simply delete DMs one by one from your inbox.
Two questions:
1. How do I delete DMs from my inbox?
2. Where is the complete documentation? (Apologies if this is obvious and I missed it, but it's not under the "Documentation" tab on TweetSharp's CodePlex site. The only thing under "Documentation" is several primitive examples.)
//Authenticate...

//Declarations:
string message = null;
List<string> messages = new List<string>();
IEnumerable<TwitterDirectMessage> directMessages = service.ListDirectMessagesReceived();

//Fetch all current direct messages:
foreach (TwitterDirectMessage directMessage in directMessages)
{
    //Store each message in a list, in reverse order:
    message = /*"[" + directMessage.CreatedDate.ToString() + "]" +*/ directMessage.Text;
    messages.Insert(0, message);

    //Delete each DM to ensure that it is never fetched again:
    // ??
}
//Do stuff with DMs
Do you really want to delete DMs from the server? What if the user goes back to twitter.com and wants to look them up there?
Another approach could be to keep track of DMs that have been displayed before and filter them out on the client side before re-processing.
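A sketch of that idea, assuming TwitterDirectMessage exposes a numeric Id (the exact property depends on the TweetSharp version); persist the set of processed IDs between runs in a file or database:

// IDs of DMs we have already handled; keep this between polling runs.
var processedIds = new HashSet<long>();

foreach (TwitterDirectMessage dm in service.ListDirectMessagesReceived())
{
    // Add returns false if the ID was already in the set.
    if (!processedIds.Add(dm.Id))
        continue; // already processed on an earlier poll

    // ... process the new DM ...
}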
I must build an application that will use WebClient multiple times to retrieve information from a server every "t" seconds.
Here is a small plan to show you what I'm doing in my application:
1. Connect to the "USER_LOGIN" web service, which returns a GUID (unique user ID). I save it and keep it to use in future web service calls.
2. Connect to the "USER_GETINFO" web service using the GUID I saved before as a parameter. This web service returns an array of strings holding all my personal user information (my name, age, email, etc.). I save the array information this way: Textblock.Text = e.Result[2].
3. Start a DispatcherTimer with a 2-second tick to start my loop. (The purpose of this is to retrieve the information and update it every 2 seconds.)
4. Connect to the "USER_GETFRIEND" web service, which is inside my timer, giving it the GUID as a parameter. It returns an array filled with my friends' information (name, email, message, etc.). I put this web service call in the timer so my friend list refreshes every 2 seconds.
I am able to complete all the steps without any error up to step 3. When I call the "USER_GETFRIEND" web service I face two major problems:
On one side, I noticed that my number of threads increased dramatically. I always thought that when a WebClient had finished its instructions it would shut down by itself, but apparently that does not happen with asynchronous calls.
On the other side, I was surprised to see that when using the same proxy for two web service calls (i.e. if I declare test.MainSoapClient proxy = new test.MainSoapClient()), the data I retrieved from the "USER_GETFRIEND" e.Result was sent directly to my "USER_GETINFO" array. So my name and email address on the UI were replaced by the corresponding values from the USER_GETFRIEND array: my name was changed to my friend's email, and so on.
I would like to know if it's possible to close a WebClient call (or thread) that I am not using anymore, to prevent any conflicts? Or if anyone has any suggestions concerning my code and the way I should develop my application, please feel free to propose them.
I got the answer a few weeks ago and figured it was important to answer my own question.
My whole problem was that I wasn't unsubscribing from my asynchronous calls and that I was reusing the same proxy class from "Add Service Reference".
So when I was using:
proxy.webservice += new EventHandler<whateverinhere>(my_method);
I never did:
proxy.webservice -= new EventHandler<whateverinhere>(my_method);
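A sketch of the pattern, with hypothetical service, event and argument names from a generated proxy; unsubscribing inside the completion handler stops an old handler from firing again when the proxy is reused:

var proxy = new test.MainSoapClient();

EventHandler<GetFriendCompletedEventArgs> handler = null; // hypothetical args type
handler = (s, e) =>
{
    // Unsubscribe first so a later call can't re-invoke this handler.
    proxy.GetFriendCompleted -= handler;
    // ... use e.Result here ...
};
proxy.GetFriendCompleted += handler;
proxy.GetFriendAsync(userGuid); // hypothetical async method on the proxy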
Hope it will help someone.
One of the requirements for the application that I'm working on is to enable users to submit a debugging report to our helpdesk for fatal errors (much like Windows Error Reporting).
I've been told that the emails must come from the client's mail account, to prevent the helpdesk from getting spammed and loads of duplicate calls being raised.
In order to achieve this, I'm trying to compose a mail message on the server, complete with a nice message in the body for the helpdesk and the error report as an attachment, then add it to the Response so that the user can download, open and send it.
I've tried, without success, to make use of the Outlook interoperability component, which is a moot point anyway because I've discovered in the last 6 hours of googling that creating more than a few Application instances is very resource intensive.
If you want the user to send an email client side, I don't see how System.Net.Mail will help you.
You have two options:
1. mailto:support@domain.com?subject=Error&body=Error message here...
2. Get the user to download the email in some format, open it in their mail client and send it.
Option 1 will probably break down with complex bodies. With option 2, you need to find a format that is supported by all the mail clients your users use.
With option 1, you could store the email details locally on your server against some error ID and just send the email with the error ID in the subject:
mailto:support@domain.com?subject=Error 987771 encountered
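A sketch of building such a link server-side; the domain and helper name are illustrative, and the subject is URL-encoded since mailto links are URIs:

// Build a mailto: link carrying only the error ID; the full report
// stays on the server, keyed by that ID.
public static string BuildMailtoLink(int errorId)
{
    string subject = Uri.EscapeDataString("Error " + errorId + " encountered");
    return "mailto:support@domain.com?subject=" + subject;
}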
In one of our applications the user hits the generate button and it creates and opens the email in Outlook. All they have to do is hit the send button. The function is below.
public static void generateEmail(string emailTo, string ccTo, string subject, string body, bool bcc)
{
    Outlook.Application objOutlook = new Outlook.Application();
    Outlook.MailItem mailItem = (Outlook.MailItem)objOutlook.CreateItem(OlItemType.olMailItem);

    /* Sets the recipient e-mails to be sent either via 'To:' or 'BCC:',
     * depending on the boolean 'bcc' that was passed in. */
    if (!bcc)
    {
        mailItem.To = emailTo;
    }
    else
    {
        mailItem.BCC = emailTo;
    }
    mailItem.CC = ccTo;

    mailItem.Subject = subject;
    mailItem.Body = body;
    mailItem.BodyFormat = OlBodyFormat.olFormatPlain;

    // Show the message so the user can review it and hit Send.
    mailItem.Display(false);
}
As you can see, it outputs the email in plain text at the moment because it was required to be BlackBerry friendly. You can easily change the format to HTML or rich text if you want some formatting options; for HTML, use mailItem.HTMLBody.
Hope this helps.
EDIT:
I should note that this is used in a C# application, and that the email class containing the function references Microsoft.Office.Core and has a using directive for the Outlook interop namespace.
The simple answer is that what you are trying to achieve isn't realistically achievable across all platforms and mail clients. When asked to do the improbable, it is wise to come up with an alternative and suggest that instead.
Assuming that your fault report is only accessible from an error page, you've already got a barrier to spam, unless the spammers can force an exception.
I've always handled this by logging the fault and its text into the database and integrating that with a ticketing system. Maybe also have a mailto: as Bruce suggests, with subject=ID&body=text, to allow the user to send something by email.
I don't think an .eml format file will help either, because they'll need to forward it, and most users would probably get confused.
A .eml is effectively the plain text of the message, including headers, as per RFC 5322.