What is the best way to prevent a user's double click or page refresh from executing a SQL INSERT statement twice? I've tried disabling the button after the click, but the result is not really good. I'm expecting that it's possible to do this from the code-behind, something more like a SQL commit and rollback.
Perhaps the PRG pattern, described in this Wikipedia article, can help you:
Post/Redirect/Get (PRG) is a common design pattern for web developers
to help avoid certain duplicate form submissions and allow user agents
to behave more intuitively with bookmarks and the refresh button.
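In ASP.NET MVC, that boils down to something like the following sketch (the action, view model, and service names here are invented for illustration):

[HttpPost]
public ActionResult Submit(OrderViewModel model)
{
    // Do the insert exactly once on the POST.
    _orderService.Create(model);

    // Redirect so that a browser refresh re-issues the harmless GET
    // below instead of replaying this POST.
    return RedirectToAction("Confirmation");
}

[HttpGet]
public ActionResult Confirmation()
{
    return View();
}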
If you wish to protect against this you're going to need the server to be aware that the current user has already begun an action, and can't begin it again until a condition is met.
You need to be able to identify that user amongst the many that are potentially visiting your site. This is most simply done using SessionState, but if you have no other need for SessionState and wish to scale your application massively, a simple random cookie to identify the user can be used as a prefix for any keys that you use to place items into the server cache.
Let's say you used SessionState. You'd do something like the following (pseudo):
public void StartAction()
{
    // Session values are plain objects, so unbox with a nullable cast.
    var inProgress = HttpContext.Current.Session["actionInProgress"] as bool? ?? false;
    if (!inProgress)
    {
        try
        {
            HttpContext.Current.Session["actionInProgress"] = true;
            MySqlController.DoWork();
        }
        finally
        {
            HttpContext.Current.Session["actionInProgress"] = false;
        }
    }
}
The above does not account for the following:
Catching exceptions and/or closing any connections in your finally block
Queueing up subsequent actions as a result of the next clicks on your client (this pseudo-code just returns if the action is already in progress)
I've gone for the simplest solution, but in reality a better practice would be to encompass this in a service which runs asynchronously, so that you can monitor the progress both for the benefit of the user and to prevent multiple parallel processes.
Related
I have the following controller:
[HttpPost]
public ActionResult SomeMethod(string foo, object bar)
{
    // Some logic
}
Now suppose that, from the view, that action is called from a button or from Ajax (with some edits), and I don't want to receive a double request.
What is the best approach to handle it from server side?
Update
You'd first have to define the time interval that would meet the
criteria of a double request – Jonesopolis
Let's suppose that in this case a double request is when the difference in time between the first and second call is less than 1s.
Frankly, you can't, at least not totally. There are certain things you can do server-side, but none are fool-proof. The general idea is that you need to identify the POST in some way. The most common approach is to set a hidden input with a GUID or similar. Then, when a request comes in, you record that GUID somewhere: in the session, in a database, etc. Before processing the request, you check whatever datastore you're using for that GUID. If it exists, it's a duplicate POST; if not, you can go ahead.
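A minimal sketch of that idea (the hidden-field name, the session-based token store, and the response code are all illustrative; a database table would be needed if you run multiple servers):

[HttpPost]
public ActionResult SomeMethod(string foo, string formToken)
{
    // Tokens this session has already consumed. The view is assumed to
    // render a fresh GUID into a hidden "formToken" input on every GET.
    var seen = Session["seenTokens"] as HashSet<string> ?? new HashSet<string>();

    if (seen.Contains(formToken))
        return new HttpStatusCodeResult(409); // duplicate POST; refuse it

    seen.Add(formToken);
    Session["seenTokens"] = seen;

    // ... process the request once ...
    return RedirectToAction("Success");
}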
However, the big stipulation here is that you have to record that somewhere, and doing that takes some period of time. It might only be milliseconds, but that could be enough time for a duplicate request to come in, especially if the user is double-clicking a submit button, which is most often the cause of a double-submit.
A web server just responds to requests as they come in, and importantly, it has multiple threads and perhaps even multiple processes serving requests simultaneously. HTTP is a stateless protocol, so the server doesn't care whether the client has made the same request before, because it effectively doesn't know that the client has made the same request before. If two duplicate requests are being served virtually simultaneously on two different threads, then it's a race to see whether one can set something identifying the other as a duplicate before the other one checks to see if it's a duplicate. In other words, most of the time you're just going to be out of luck, and both requests will go through no matter what you try to do server-side to stop duplicates.
The only reliable way to prevent double submits is to disable the submit button on submit using JavaScript. Then, the user can effectively only click once, even if they double-click. That still doesn't help you if the user disables JavaScript, of course, but that's becoming more and more rare.
Look, be careful with this approach. You will add a lot of complexity by controlling this on the server side.
First, you need to recognize when the multiple requests are coming from the same user.
I don't think that controlling this on the server side is the best way.
However, if you really want that, look here: https://stackoverflow.com/a/218919/2892830
That link suggests maintaining a list of tokens. In your case, just check whether the same token was received more than once.
At a minimum, implement double-click protection in the client-side event handler:
UseSubmitBehavior="false"
OnClientClick="this.disabled=true; this.value='Please wait';"
Check ASP.NET Life cycle
Check Request/Redirect
Add test code to see who is responsible
if (IsPostBack)
{
    string _CtrlName = Request.Params.Get("__EVENTTARGET");
    if (_CtrlName != null && _CtrlName == myButton.UniqueID)
    {
        // Do your thing
    }
}
Check IsPostBack in Page_Load and use it correctly to prevent duplicate requests.
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
    }
}
Say you have an Action in ASP.NET MVC in a multi-instance environment that looks something like this*:
public void AddLolCat(int userId)
{
    var user = _Db.Users.ById(userId);
    user.LolCats.Add(new LolCat());
    user.LolCatCount = user.LolCats.Count();
    _Db.SaveChanges();
}
When a user repeatedly presses a button or refreshes, race conditions will occur, making it possible that LolCatCount does not equal the actual number of LolCats.
Question
What is the common way to fix these issues? You could fix it client side in JavaScript, but that might not always be possible, e.g. when something happens on a page refresh, or because someone is screwing around in Fiddler.
I guess you have to make some kind of a network based lock?
Do you really have to suffer the extra latency per call?
Can you tell an Action that it is only allowed to be executed once per User?
Is there any common pattern already in place that you can use? Like a Filter or attribute?
Do you return early, or do you really lock the process?
When you return early, is there an 'established' response / response code I should return?
When you use a lock, how do you prevent thread starvation with (semi) long running processes?
* just a stupid example shown for brevity. Real world examples are a lot more complicated.
Answer 1: (The general approach)
If the data store supports transactions you could do the following:
// Requires a reference to System.Transactions.
using (var trans = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.Serializable }))
{
    var user = _Db.Users.ById(userId);
    user.LolCats.Add(new LolCat());
    user.LolCatCount = user.LolCats.Count();
    _Db.SaveChanges();
    trans.Complete();
}
This will lock the user record in the database, making other requests wait until the transaction has been committed.
Answer 2: (Only possible with single process)
Enabling sessions and using session will cause implicit locking between requests from the same user (session).
Session["TRIGGER_LOCKING"] = true;
Answer 3: (Example specific)
Deduce the number of LolCats from the collection instead of keeping track of it in a separate field and thus avoid inconsistency issues.
Answers to your specific questions:
I guess you have to make some kind of a network based lock?
yes, database locks are common
Do you really have to suffer the extra latency per call?
say what?
Can you tell an Action that it is only allowed to be executed once per User?
You could implement an attribute that uses the implicit session locking, or some custom variant of it, but that won't work between processes.
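For illustration, such an attribute might look roughly like this (a sketch, single-process only; the attribute name and response code are invented):

// Rejects a request when the same session already has this action in flight.
// Relies on in-memory per-session state, so it only works in one process.
public class SingleRequestPerSessionAttribute : ActionFilterAttribute
{
    private const string Key = "__actionInFlight";

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var session = filterContext.HttpContext.Session;
        if (session[Key] != null)
        {
            filterContext.Result = new HttpStatusCodeResult(429); // reject the duplicate
            return;
        }
        session[Key] = true;
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        filterContext.HttpContext.Session.Remove(Key);
    }
}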
Is there any common pattern already in place that you can use? Like a Filter or attribute?
Common practice is to use locks in the database to solve the multi instance issue. No filter or attribute that I know of.
Do you return early, or do you really lock the process?
Depends on your use case. Commonly you wait ("lock the process"). However, if your database store supports the async/await pattern, you would do something like
var user = await _Db.Users.ByIdAsync(userId);
This will free the thread to do other work while waiting for the lock.
When you return early, is there an 'established' response / response code I should return?
I don't think so, pick something that fits your use case.
When you use a lock, how do you prevent thread starvation with (semi) long running processes?
I guess you should consider using queues.
By "multi-instance" you're obviously referring to a web farm or maybe a web garden situation where just using a mutex or monitor isn't going to be sufficient to serialize requests.
So... do you have just one database on the back end? Why not just use a database transaction?
It sounds like you probably don't want to force serialized access to this one section of code for all user id's, right? You want to serialize requests per user id?
It seems to me that the right thinking about this is to serialize access to the source data, which is the LolCats records in the database.
I do like the idea of disabling the button or link in the browser for the duration of a request, to prevent the user from hammering away on the button over and over again before previous requests finish processing and return. That seems like an easy enough step with a lot of benefit.
But I doubt that is enough to guarantee the serialized access you want to enforce.
You could also implement shared session state and implement some kind of a lock on a session-based object, but it would probably need to be a collection (of user id's) in order to enforce the serializable-per-user paradigm.
I'd vote for using a database transaction.
I suggest, and personally use, a mutex in this case.
I have written a class that handles the mutex here: Mutex release issues in ASP.NET C# code, but you can make your own.
So, based on the class from that answer, your code will look like:
public void AddLolCat(int userId)
{
    // Prefix the number with some text; since the id is just an integer,
    // a more distinctive key helps avoid name conflicts.
    var gl = new MyNamedLock("SiteName." + userId.ToString());
    try
    {
        // Enter the lock, or give up after a timeout.
        if (gl.enterLockWithTimeout())
        {
            var user = _Db.Users.ById(userId);
            user.LolCats.Add(new LolCat());
            user.LolCatCount = user.LolCats.Count();
            _Db.SaveChanges();
        }
        else
        {
            // Log the error.
            throw new Exception("Failed to enter lock");
        }
    }
    finally
    {
        // Leave the lock.
        gl.leaveLock();
    }
}
Here the lock is based on the user, so different users will not block each other.
About Session Lock
If you use the ASP.NET session in your call, then you may win a free lock "ticket" from the session. The session is locked on each call until the page is returned.
Read about that on this q/a:
Web app blocked while processing another web app on sharing same session
Does ASP.NET Web Forms prevent a double click submission?
jQuery Ajax calls to web service seem to be synchronous
Well, MVC is stateless, meaning that you'll have to handle this yourself manually. From a purist perspective I would recommend preventing the multiple presses by using a client-side lock, although my preference is to disable the button and apply an appropriate CSS class to demonstrate its disabled state. I guess my reasoning is that we cannot fully determine the consumer of the action, so while you provide the example of Fiddler, there is no way to truly determine whether multiple clicks are applicable or not.
However, if you wanted to pursue a server-side locking mechanism, this article provides an example storing the requester's information in the server-side cache and returns an appropriate response depending on the timeout / actions you would want to implement.
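As a rough sketch of that style of check, using the 1-second window from this question (the key scheme and response code are arbitrary choices, not from the article):

[HttpPost]
public ActionResult SomeMethod(string foo, object bar)
{
    // One cache entry per user per action. Cache.Add is atomic, so only
    // the first of two near-simultaneous requests gets null back.
    string user = String.IsNullOrEmpty(User.Identity.Name)
        ? Session.SessionID
        : User.Identity.Name;
    string key = "SomeMethod:" + user;

    object existing = HttpContext.Cache.Add(
        key, true, null,
        DateTime.UtcNow.AddSeconds(1), // the 1s double-request window
        System.Web.Caching.Cache.NoSlidingExpiration,
        System.Web.Caching.CacheItemPriority.Normal, null);

    if (existing != null)
        return new HttpStatusCodeResult(429); // duplicate within 1s

    // ... normal processing ...
    return new EmptyResult();
}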
HTH
One possible solution is to avoid the redundancy which can lead to inconsistent data.
i.e. If LolCatCount can be determined at runtime, then determine it at runtime instead of persisting this redundant information.
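Applied to the LolCat example, the count simply becomes a derived property (a sketch):

public class User
{
    public virtual ICollection<LolCat> LolCats { get; set; }

    // Derived at read time rather than persisted, so it can never
    // drift out of sync with the collection.
    public int LolCatCount
    {
        get { return LolCats.Count; }
    }
}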
Ok so I am not very familiar with databases so there may be a simple solution that I am not aware of.
I have a SQL database that is to be managed by a class in my C# application. What I want the class to do is to constantly check the database to see if there is new data. If there is new data, I want it to trigger an event that another class will be listening to. Now I'm guessing that I need to implement a thread that will check the database every millisecond or so. However, what would I need to look for in order to fire my event? Can the database notify the class when there is a new entry?
If you are using MS SQL Server, you can use the SqlDependency class from the .NET Framework to get notifications about database changes.
Maybe other database systems have similar mechanisms in their database driver packages.
If you cannot use that for whatever reason, you will need a Thread to poll the database periodically.
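The basic shape of a SqlDependency subscription is sketched below (the table, columns, and event are placeholders; the query must follow the documented notification rules, e.g. a two-part table name and an explicit column list, and each notification fires only once, so you re-subscribe in the handler):

using System;
using System.Data.SqlClient;

public class NewRowWatcher
{
    private readonly string _connString;

    public event Action NewData; // raised when the watched table changes

    public NewRowWatcher(string connString)
    {
        _connString = connString;
        SqlDependency.Start(_connString); // requires Service Broker enabled
        Subscribe();
    }

    private void Subscribe()
    {
        using (var conn = new SqlConnection(_connString))
        using (var cmd = new SqlCommand("SELECT Id, Payload FROM dbo.Messages", conn))
        {
            var dep = new SqlDependency(cmd);
            dep.OnChange += (s, e) =>
            {
                var handler = NewData;
                if (handler != null) handler();
                Subscribe(); // notifications fire once; register again
            };

            conn.Open();
            using (var reader = cmd.ExecuteReader()) { } // executing registers the subscription
        }
    }
}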
1. If you want the database to inform your application about a change, then you can use the Broker (first you enable your database to support Brokers, and then you write some code so as to "attach" the Broker). For your application you will need the SqlDependency class.
Helpful links:
Enable Broker
Query Notifications in SQL Server
If you want to check multiple queries then be aware that the Broker is a little heavy.
2. If you want your application to do all the work, you have to create a function that checks the CHECKSUM for the selected table; each time you keep the last checksum, and if you find any difference you "hit" the database to get the new data.
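As a sketch of option 2, assuming MS SQL Server (the table name is a placeholder; CHECKSUM_AGG over per-row CHECKSUMs is one common way to fingerprint a table):

using System;
using System.Data.SqlClient;

public class ChecksumPoller
{
    private int? _lastChecksum;

    // Compares a whole-table checksum against the last one seen; only
    // when it differs do you go back to read the actual rows.
    public bool TableChanged(SqlConnection openConnection)
    {
        using (var cmd = new SqlCommand(
            "SELECT CHECKSUM_AGG(CHECKSUM(*)) FROM dbo.MyTable", openConnection))
        {
            object result = cmd.ExecuteScalar();
            int? checksum = result is DBNull ? (int?)null : (int)result;
            bool changed = checksum != _lastChecksum;
            _lastChecksum = checksum;
            return changed;
        }
    }
}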
You have to decide who is going to do the job!
Hope it helps.
Other than using SqlDependency, you can use a Timer, or SqlCacheDependency if you are using ASP.NET or MVC with the Cache object. 1ms intervals are not recommended though, as you probably won't complete your check before the next one starts, and your database load will be very high as a result. You could also make sure you use the Timer.AutoReset property so you don't have calls tripping over each other.
Edit 2: This MSDN example shows how you can use SqlDependency, including having to Enable Query Notifications (MSDN). There are many considerations for using SqlDependency, for example it was really designed for web servers where limited watchers would be created, not so much for desktop applications, so keep that in mind. There is a good article on BOL on this called Planning for Notifications which emphasises that Query notifications are useful
if the data in the query changes relatively infrequently, if the application does not require an instantaneous update when the data changes, and if the query meets the requirements and restrictions outlined in Creating a Query for Notification
In your sample you suggest the need for 1ms latency, so maybe the Dependency classes are not the best way for you (also see my later comment on your latency requirement).
EDIT: For example (using the timer):
using System;
using System.Timers;

class Program
{
    static void Main(string[] args)
    {
        Timer timer = new Timer(1);
        timer.Elapsed += timer_Elapsed;
        timer.AutoReset = false;
        timer.Enabled = true;

        Console.ReadLine(); // keep the process alive while the timer runs
    }

    static void timer_Elapsed(object sender, ElapsedEventArgs e)
    {
        Timer timer = (Timer)sender;
        try
        {
            // do the checks here
        }
        finally
        {
            // re-enable the timer to check again very soon
            timer.Enabled = true;
        }
    }
}
As for what to check, it depends on what changes you are actually looking to detect. Here are some ideas:
table row count (but dangerous if a row is added and deleted since the last check)
max value of the table id column (only works if you have a numeric identity field that is increasing, and only works to check for new rows; see the sketch after this list)
check individual columns for changes in specific rows you want to watch
use a row CHECKSUM in a column to check for changes on individual rows
ask writers to update a separate table with a change reference id that you can check
use audit tables to record changes, and check for new audit records
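For instance, the max-id idea can be as simple as the following sketch (it assumes an increasing identity column named Id, and by design it misses updates and deletes):

using System.Data.SqlClient;

public class MaxIdPoller
{
    private int _lastMaxId;

    // True when rows with a higher identity value have appeared
    // since the previous call.
    public bool HasNewRows(SqlConnection openConnection)
    {
        using (var cmd = new SqlCommand(
            "SELECT ISNULL(MAX(Id), 0) FROM dbo.MyTable", openConnection))
        {
            int maxId = (int)cmd.ExecuteScalar();
            bool hasNew = maxId > _lastMaxId;
            _lastMaxId = maxId;
            return hasNew;
        }
    }
}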
You need to better define the scope of your change monitoring before you can get a good answer to this.
Latency
Also ask yourself if you really need 1ms latency on change updates. If you do, a different approach entirely might be better. For example you may need to use a notification mechanism by the data writers to the parts of your application that need to know an update has occurred right now.
Lately, in apps I've been developing, I have been checking the number of rows affected by an insert, update, or delete to the database, and logging an error if the number is unexpected. For example, on a simple insert, update, or delete of one row, if any number of rows other than one is returned from an ExecuteNonQuery() call, I will consider that an error and log it. Also, I realize now as I type this that I do not even try to roll back the transaction if that happens, which is not best practice and should definitely be addressed. Anyway, here's code to illustrate what I mean:
I'll have a data layer function that makes the call to the db:
public static int DLInsert(Person person)
{
    Database db = DatabaseFactory.CreateDatabase("dbConnString");
    using (DbCommand dbCommand = db.GetStoredProcCommand("dbo.Insert_Person"))
    {
        db.AddInParameter(dbCommand, "@FirstName", DbType.String, person.FirstName);
        db.AddInParameter(dbCommand, "@LastName", DbType.String, person.LastName);
        db.AddInParameter(dbCommand, "@Address", DbType.String, person.Address);
        return db.ExecuteNonQuery(dbCommand);
    }
}
Then a business layer call to the data layer function:
public static bool BLInsert(Person person)
{
    if (DLInsert(person) != 1)
    {
        // log exception
        return false;
    }
    return true;
}
And in the code-behind or view (I do both webforms and mvc projects):
if (BLInsert(person))
{
    // carry on as normal with whatever other code after successful insert
}
else
{
    // throw an exception that directs the user to one of my custom error pages
}
The more I use this type of code, the more I feel like it is overkill. Especially in the code-behind/view. Is there any legitimate reason to think a simple insert, update, or delete wouldn't actually modify the correct number of rows in the database? Is it more plausible to only worry about catching an actual SqlException and then handling that, instead of doing the monotonous check for rows affected every time?
Thanks. Hope you all can help me out.
UPDATE
Thanks everyone for taking the time to answer. I still haven't 100% decided what setup I will use going forward, but here's what I have taken away from all of your responses.
Trust the DB and .Net libraries to handle a query and do their job as they were designed to do.
Use transactions in my stored procedures to roll back the query on any errors, and potentially use RAISERROR to throw those exceptions back to the .NET code as a SqlException, which it could handle with a try/catch (see the sketch below). This approach would replace the problematic return-code checking.
Would there be any issue with the second bullet point that I am missing?
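For reference, the .NET side of that second bullet would be roughly this (a sketch; it assumes the proc wraps its work in a transaction with TRY...CATCH and calls RAISERROR with severity 16 or higher, which surfaces as a SqlException):

using System.Data.SqlClient;

public static bool BLInsert(Person person)
{
    try
    {
        DLInsert(person);
        return true;
    }
    catch (SqlException ex)
    {
        // A RAISERROR(..., 16, 1) inside the proc lands here, after the
        // proc has already rolled its own transaction back.
        System.Diagnostics.Trace.TraceError(ex.ToString()); // stand-in for real logging
        return false;
    }
}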
I guess the question becomes, "Why are you checking this?" If it's just because you don't trust the database to perform the query, then it's probably overkill. However, there could exist a logical reason to perform this check.
For example, I worked at a company once where this method was employed to check for concurrency errors. When a record was fetched from the database to be edited in the application, it would come with a LastModified timestamp. Then the standard CRUD operations in the data access layer would include a WHERE LastModified = @LastModified clause when doing an UPDATE and check the record modified count. If no record was updated, it would assume a concurrency error had occurred.
I felt it was kind of sloppy for concurrency checking (especially the part about assuming the nature of the error), but it got the job done for the business.
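That pattern, roughly (a sketch; the table, columns, and parameter names are invented, and the connection is assumed to be open):

using System.Data;
using System.Data.SqlClient;

public static void UpdatePerson(SqlConnection conn, Person person)
{
    const string sql =
        @"UPDATE dbo.Person
             SET FirstName = @FirstName,
                 LastModified = GETUTCDATE()
           WHERE Id = @Id
             AND LastModified = @LastModified"; // the timestamp read earlier

    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@FirstName", person.FirstName);
        cmd.Parameters.AddWithValue("@Id", person.Id);
        cmd.Parameters.AddWithValue("@LastModified", person.LastModified);

        // Zero rows affected means another user changed (or deleted) the
        // record after we read it: report a concurrency conflict.
        if (cmd.ExecuteNonQuery() == 0)
            throw new DBConcurrencyException("Record was modified by another user.");
    }
}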
What concerns me more in your example is the structure of how this is being accomplished. The 1 or 0 being returned from the data access code is a "magic number." That should be avoided. It's leaking an implementation detail from the data access code into the business logic code. If you do want to keep using this check, I'd recommend moving the check into the data access code and throwing an exception if it fails. In general, return codes should be avoided.
Edit: I just noticed a potentially harmful bug in your code as well, related to my last point above. What if more than one record is changed? It probably won't happen on an INSERT, but could easily happen on an UPDATE. Other parts of the code might assume that != 1 means no record was changed. That could make debugging very problematic :)
On the one hand, most of the time everything should behave the way you expect, and on those times the additional checks don't add anything to your application. On the other hand, if something does go wrong, not knowing about it means that the problem may become quite large before you notice it. In my opinion, the little bit of additional protection is worth the little bit of extra effort, especially if you implement a rollback on failure. It's kinda like an airbag in your car... it doesn't really serve a purpose if you never crash, but if you do it could save your life.
I've always preferred to RAISERROR in my sproc and handle exceptions rather than counting. This way, if you update a sproc to do something else, like logging/auditing, you don't have to worry about keeping the row counts in check.
Though if you like the second check in your code or would prefer not to deal with exceptions/raiserror, I've seen teams return 0 on successful sproc executions for every sproc in the db, and return another number otherwise.
It is absolutely overkill. You should trust that your core platform (.NET libraries, SQL Server) works correctly; you shouldn't be worrying about that.
Now, there are some related instances where you might want to test, like if transactions are correctly rolled back, etc.
If there is a need for that check, why not do it within the database itself? You save yourself a round trip and it's done at a more 'centralized' stage: if you check in the database, you can be assured it's being applied consistently from any application that hits that database, whereas if you put the logic in the UI, you need to make sure that every UI application that hits that particular database applies the correct logic, and applies it consistently.
My question is a little bit tricky, at least I think so. Maybe not; anyway, I want to know if there's any way to tell when the user is leaving the page, whether he clicks the "Previous" button, closes the window, or clicks a link on my website. If my memory's still good, I think it's possible with JavaScript.
But in my case, I want to do some stuff (cleaning up objects) in my code-behind.
There really is no way to do it. No event is fired when the browser goes back.
You can do it with Javascript, but it is difficult at best.
See the question here.
This script will also work. It was found here
<script>
window.onbeforeunload = function (evt) {
    var message = 'Are you sure you want to leave?';
    if (typeof evt == 'undefined') {
        evt = window.event;
    }
    if (evt) {
        evt.returnValue = message;
    }
    return message;
}
</script>
You can use JavaScript to tell the server when the user leaves the page. But the web server washes its hands of the page once it leaves the server, while the user might keep the page open for a week.
If you use javascript on the page to fire off a notice to your server when the page unloads you can take some action. But, you can't tell if he's leaving your page for another one of your pages, another website, or closing the browser.
And that last notice isn't guaranteed to always be sent, so you can't rely on it completely.
So using the JavaScript notice to clean up objects (caches or sessions) is a flawed system. You're better off with cache and session invalidation strategies that are independent of the onunload notice.
If the goal of this is to cleanup objects in your code-behind, is it good enough to rely on the session timeout? It would be a 20 minute delay before cleaning up those objects (or whatever you have your session timeout set for).
But it's an easy and dependable way to do it. In your Global.asax file you'd just do this:
Sub Session_End(ByVal sender As Object, ByVal e As EventArgs)
    ' Clean up stuff
End Sub
As mentioned, you can do it unreliably in JavaScript, but I am assuming from the wording of your question that you want to do it in the Server-side code ("the code-behind").
That is impossible since the code-behind is running on a different computer (the web server) and has no idea what is going on at the user's computer unless the browser sends a request back to the server (which requires some Javascript or other client side code).
If you must, use the session timeout events to clean up any state information you are storing beyond the lifetime of a request.
Euuwww... well, ok. My problem is that I've stored an object (with a bunch of search criteria) in a Session variable. By storing it in a session variable, if the user goes back to my search results page, it will list the same results as last time. But after some reflection (not in code but in my head, and after seeing your answer :oP), I will add a boolean property to my object which will mean something like "IsDirty" or "HasBeenListed". And when it loads back, if it's set to dirty, it's going to redirect to another page.
Thanks, David.