Lock web service method for one client - c#

This is my scenario: I'm working on an e-commerce system (ASP.NET MVC) in which users can 'Like' products. I have a method in my web service (.svc) that removes the like if the user has already liked that product, and adds a like otherwise.
I have an issue here: I call the 'Like' method via Ajax asynchronously, and if the user clicks the like button multiple times in quick succession, the method is called multiple times in quick succession as well. Since this method is not thread-safe, I might run into problems here (for example, the like might be inserted multiple times for a single user in the database).
I've been trying to make it thread-safe by using a lock statement, but apparently it still doesn't work.
This is my web service method:
private static Object productLikeLock = new Object();

[WebMethod, ScriptMethod(ResponseFormat = ResponseFormat.Json)]
public bool ToggleProductLike(int productId)
{
    try
    {
        if (User.Identity.IsAuthenticated)
        {
            lock (productLikeLock)
            {
                if (/* User has already liked the product */)
                {
                    // Remove it from the database
                    return true;
                }
                else
                {
                    // Add it to the database
                    return true;
                }
            }
        }
        return false;
    }
    catch (Exception ex)
    {
        return false;
    }
}
Can anyone help me and let me know what is wrong with my code and approach?
Thanks in advance

What you really want is for the lock to apply only to the current user, not to the whole application. A private static field is not a good fit for the lock here. You could potentially use a Dictionary<UserId, LockObject> for the lock, but this is still not great.
What I would suggest is not trying to implement this function as a toggle. Toggling state will always get you into this sort of trouble.
Try re-engineering the application to have an explicit Like and Dislike action. That way you don't have to lock anything.
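For example, here is a minimal sketch of what that could look like. The _likeRepository field and its AddLike/RemoveLike methods are hypothetical placeholders, assumed to be idempotent at the database level (e.g. a unique constraint on the user/product pair plus INSERT ... WHERE NOT EXISTS, and a plain DELETE):

// Sketch only: two explicit, idempotent endpoints instead of a toggle.
[WebMethod, ScriptMethod(ResponseFormat = ResponseFormat.Json)]
public bool LikeProduct(int productId)
{
    if (!User.Identity.IsAuthenticated)
        return false;

    // Idempotent: inserting the same (user, product) pair twice has no effect.
    _likeRepository.AddLike(User.Identity.Name, productId);
    return true;
}

[WebMethod, ScriptMethod(ResponseFormat = ResponseFormat.Json)]
public bool UnlikeProduct(int productId)
{
    if (!User.Identity.IsAuthenticated)
        return false;

    // Idempotent: deleting a row that is already gone is a no-op.
    _likeRepository.RemoveLike(User.Identity.Name, productId);
    return true;
}

Because each endpoint is idempotent, rapid repeated clicks simply repeat the same effect and no locking is needed.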


Enable ASP.Net Core Session Locking?

According to the ASP.Net Core docs, the behaviour of the session state has changed in that it is now non-locking:
Session state is non-locking. If two requests simultaneously attempt to modify the contents of a session, the last request overrides the first. Session is implemented as a coherent session, which means that all the contents are stored together. When two requests seek to modify different session values, the last request may override session changes made by the first.
My understanding is that this is different to the behaviour of the session in the .Net Framework, where the user's session was locked per request so that whenever you read from/wrote to it, you weren't overwriting another request's data or reading stale data, for that user.
My question(s):
Is there a way to re-enable this per-request locking of the user's session in .Net Core?
If not, is there a reliable way to use the session to prevent duplicate submission of data for a given user? To give a specific example, we have a payment process that involves the user returning from an externally hosted ThreeDSecure (3DS) iFrame (a payment card security process). We are noticing that sometimes (somehow) the user submits the form within the iFrame multiple times, which we have no control over. As a result, this triggers multiple callbacks to our application. In our previous .NET Framework app, we used the session to indicate that a payment was in progress. If this flag was set in the session and you hit the 3DS callback again, the app would stop you from proceeding. However, now it seems that because the session isn't locked, when these near-simultaneous, duplicate callbacks occur, thread 'A' sets 'payment in progress = true' but thread 'B' doesn't see that in time; its snapshot of the session still shows 'payment in progress = false' and the callback logic is processed twice.
What are some good approaches to handling simultaneous requests accessing the same session, now that the way the session works has changed?
The problem you are facing is called a Race Condition (stackoverflow, wiki). In short, you want exclusive access to the session state; there are several ways to achieve that, and they depend heavily on your architecture.
In-process synchronization
If you have a single machine with a single process handling all requests (for example, a self-hosted server such as Kestrel), you may use lock. Just do it correctly, not the way #TMG suggested.
Here is an implementation reference:
Use a single global object to lock all threads:
private static object s_locker = new object();

public bool Process(string transaction)
{
    lock (s_locker)
    {
        // true if this session already has a transaction in progress
        var transactionInProgress = HttpContext.Session.TryGetValue("TransactionId", out _);
        if (!transactionInProgress)
        {
            // ... handle transaction
        }
        return transactionInProgress;
    }
}
Pros: a simple solution
Cons: all requests from all users will wait on this lock
Use a per-session lock object. The idea is similar, but instead of a single object you use a dictionary:
internal class LockTracker : IDisposable
{
    private static Dictionary<string, LockTracker> _locks = new Dictionary<string, LockTracker>();
    private int _activeUses = 0;
    private readonly string _id;

    private LockTracker(string id) => _id = id;

    public static LockTracker Get(string id)
    {
        lock (_locks)
        {
            if (!_locks.ContainsKey(id))
                _locks.Add(id, new LockTracker(id));
            var res = _locks[id];
            res._activeUses += 1;
            return res;
        }
    }

    void IDisposable.Dispose()
    {
        lock (_locks)
        {
            _activeUses--;
            if (_activeUses == 0)
                _locks.Remove(_id);
        }
    }
}
public bool Process(string transaction)
{
    var session = HttpContext.Session;
    var locker = LockTracker.Get(session.Id);
    using (locker)      // remove the lock object after execution if no other requests use it
    lock (locker)       // synchronize threads on a session-specific object
    {
        // check if the current session already has a transaction in progress
        var transactionInProgress = session.TryGetValue("TransactionId", out _);
        if (!transactionInProgress)
        {
            // if there is no transaction, record it and handle it
            HttpContext.Session.Set("TransactionId", System.Text.Encoding.UTF8.GetBytes(transaction));
            HttpContext.Session.Set("StartTransaction", BitConverter.GetBytes(DateTimeOffset.UtcNow.ToUnixTimeSeconds()));
            // handle the transaction here
        }
        // return whatever you need; here it is just a boolean
        return transactionInProgress;
    }
}
Pros: manages concurrency on the session level
Cons: more complex solution
Remember that the lock-based options will only work when the same process on the web server handles all of the user's requests - lock is an intra-process synchronization mechanism! Depending on what you use as a persistent layer for sessions (like NCache or Redis), this option might nevertheless be the most performant.
Cross-process synchronization
If there are several processes on the machine (for example you have IIS and the app pool is configured to run multiple worker processes), then you need to use a kernel-level synchronization primitive, such as Mutex.
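For example, here is a minimal sketch using a named mutex. The naming scheme and the five-second timeout are illustrative assumptions; the Global\ prefix makes the mutex visible to every process on the machine:

public bool Process(string transaction)
{
    // One named mutex per session, shared by every worker process on this machine.
    using (var mutex = new System.Threading.Mutex(false, @"Global\TransactionLock_" + HttpContext.Session.Id))
    {
        if (!mutex.WaitOne(TimeSpan.FromSeconds(5)))
            return true; // could not acquire the lock in time; treat as "already in progress"

        try
        {
            var transactionInProgress = HttpContext.Session.TryGetValue("TransactionId", out _);
            if (!transactionInProgress)
            {
                HttpContext.Session.Set("TransactionId", System.Text.Encoding.UTF8.GetBytes(transaction));
                // handle the transaction here
            }
            return transactionInProgress;
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}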
Cross-machine synchronization
If you have a load balancer (LB) in front of your web farm, so that any of N machines can handle a user's request, then getting exclusive access is not so trivial.
One option here is to simplify the problem by enabling the 'sticky sessions' option in your LB so that all requests from the same user (session) are routed to the same machine. In this case you are fine to use any cross-process or in-process synchronization option (depending on what you run there).
Another option is to externalize the synchronization, for example by moving it to a transactional DB, similar to what #HoomanBahreini suggested. Beware that you need to be very careful about failure scenarios: you may mark a session as in progress and then the web server handling it crashes, leaving it locked in the DB.
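For instance, here is a rough sketch of a DB-backed lock with an expiry, so that a lock left behind by a crashed server can eventually be reclaimed. The SessionLocks table, its columns and the helper name are all hypothetical; SessionId is assumed to be the primary key:

using System;
using System.Data.SqlClient;

public bool TryAcquireSessionLock(string sessionId, TimeSpan ttl, string connectionString)
{
    // Reclaim an expired lock, then try to insert our own row with a fresh expiry.
    const string sql = @"
        DELETE FROM SessionLocks WHERE SessionId = @id AND ExpiresAt < SYSUTCDATETIME();
        INSERT INTO SessionLocks (SessionId, ExpiresAt)
        SELECT @id, DATEADD(SECOND, @ttl, SYSUTCDATETIME())
        WHERE NOT EXISTS (SELECT 1 FROM SessionLocks WHERE SessionId = @id);
        SELECT @@ROWCOUNT;";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@id", sessionId);
        cmd.Parameters.AddWithValue("@ttl", (int)ttl.TotalSeconds);
        try
        {
            conn.Open();
            // 1 row inserted means we own the lock; 0 means another non-expired lock exists.
            return (int)cmd.ExecuteScalar() > 0;
        }
        catch (SqlException)
        {
            // A concurrent caller won the race and inserted the primary key first.
            return false;
        }
    }
}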
Important
In all of these options you must ensure that you obtain the lock before reading the state and hold it until you have updated the state.
Please clarify what option is the closest to your case and I can provide more technical details.
Session is designed to store temporary user data across multiple requests; a good example is login state... without a session you would have to log in to stackoverflow.com every time you opened a new question... but the website remembers you, because you send it your session state inside a cookie. According to Microsoft:
The session data is backed by a cache and considered ephemeral data. The site should continue to function without the session data. Critical application data should be stored in the user database and cached in session only as a performance optimization.
It is quite simple to implement a locking mechanism to solve your mutex issue; however, the session itself is not reliable storage and you may lose its contents at any time.
How to identify duplicate payments?
The problem is you are getting multiple payment requests and you want to discard the duplicate ones... what's your definition of a duplicate payment?
Your current solution discards the second payment while a first one is in progress... let's say your payment takes 2 seconds to complete... what will happen if you receive the duplicate payment after 3 seconds?
Every reliable payment system includes a unique PaymentId in its requests... what you need to do is mark this PaymentId as processed in your DB. This way you won't process the same payment twice, no matter when the duplicate request arrives.
You can use a Unique Constraint on PaymentId to prevent duplicate payments:
public bool ProcessPayment(Payment payment)
{
    bool res = InsertIntoDb(payment);
    if (res == false)
    {
        return false; // <-- insert has failed because PaymentId is not unique
    }
    Process(payment);
    return true;
}
Same example using lock:
public class SafePayment
{
    private static readonly Object lockObject = new Object();

    public bool ProcessPayment(Payment payment)
    {
        lock (lockObject)
        {
            var duplicatePayment = ReadFromDb(payment.Id);
            if (duplicatePayment != null)
            {
                return false; // <-- duplicate
            }
            Process(payment);
            WriteToDb(payment);
            return true;
        }
    }
}

What is a good approach to validate HTTP requests in .NET Core 3.1 APIs?

I am new to building APIs. My project contains three typical layers: controllers, services responsible for business logic, and repositories for accessing data. Every request coming into my controllers has to go through some validation before a specific action is performed. For example, please look at the code below:
[HttpPost]
public async Task<ActionResult<TicketDTO>> CreateTicketAsync([FromBody] CreateTicketDTO ticket)
{
    try
    {
        if (ticket.Events == null)
        {
            return BadRequest(new { Message = _localizer["LackOfEventsErrorMessage"].Value });
        }
        var user = await _userService.GetUserByIdAsync(ticket.UserId);
        if (user == null)
        {
            return NotFound(new { Message = _localizer["UserNotFoundErrorMessage", ticket.UserId].Value });
        }
        var invalidTicket = await _ticketService.CheckHasUserPlayedAnyOfGamesBeforeAsync(ticket);
        if (invalidTicket)
        {
            return BadRequest(new { Message = _localizer["EventsRepeatedByUserErrorMessage"].Value });
        }
        var createdTicket = await _ticketService.AddTicketAsync(ticket);
        if (createdTicket == null)
        {
            return BadRequest(new { Message = _localizer["TicketNotCreatedErrorMessage"].Value });
        }
        return CreatedAtAction(nameof(GetTicketById), new { ticketId = createdTicket.TicketId }, createdTicket);
    }
    catch (Exception ex)
    {
        return StatusCode(StatusCodes.Status500InternalServerError,
            new
            {
                Message = ex.InnerException != null
                    ? $"{ex.Message} {ex.InnerException.Message}"
                    : ex.Message
            });
    }
}
This is one of my controller methods. Before the ticket is saved to the database, it has to pass a few checks; for example, the owner of the ticket must exist, otherwise I return 'user not found', and so on. The problem is that I do not really like this way of validating requests. The method is messy and not very readable. I would like to know a good approach to validating requests and reacting properly when something goes wrong (for example, returning 'UserNotFoundErrorMessage' if there is no user in the database). A single catch block doesn't solve my problem, and I wouldn't like to have multiple catch blocks either; I find that messy too. Am I wrong?
I wonder whether the attached snippet violates any clean code rules. How should the code look? What am I doing wrong?
All of this logic should be shuffled into your business layer, i.e. your services. The service methods, then, should return a "result" class, which is basically just a way of sending multiple bits of information back as the return, i.e. success/fail status, errors, if any, the actual result in the case of a query or something, etc. Then, you can simply switch on the error and respond accordingly.
As far as the catches go, especially the main one that simply returns a 500, use a global exception handler. Let the error bubble up from the action and rely on the global handler to return an appropriate response.
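A minimal sketch of the result-class idea, with hypothetical names (ServiceResult, TicketError and the service's CreateTicketAsync method are illustrations, not your actual API):

// Hypothetical result type returned by the service layer.
public enum TicketError { None, MissingEvents, UserNotFound, EventsRepeated, NotCreated }

public class ServiceResult<T>
{
    public bool Success { get; set; }
    public TicketError Error { get; set; }
    public T Value { get; set; }
}

// The controller then only translates the result into an HTTP response.
[HttpPost]
public async Task<ActionResult<TicketDTO>> CreateTicketAsync([FromBody] CreateTicketDTO ticket)
{
    var result = await _ticketService.CreateTicketAsync(ticket); // service performs all the checks

    if (result.Success)
        return CreatedAtAction(nameof(GetTicketById), new { ticketId = result.Value.TicketId }, result.Value);

    return result.Error switch
    {
        TicketError.UserNotFound => NotFound(new { Message = _localizer["UserNotFoundErrorMessage", ticket.UserId].Value }),
        // assuming the remaining error names double as localization keys
        _ => BadRequest(new { Message = _localizer[result.Error.ToString()].Value })
    };
}

With a global exception handler (e.g. app.UseExceptionHandler) registered, the try/catch and the 500 response disappear from the action entirely.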
Like others have already pointed out, this does not seem half bad.
I can tell you that we have snippets of code ten times this size. To be honest, this seems small compared to some modules I've seen in my company's codebase.
That being said, you could move a bit more logic away from the controller and into the other layers. For example, when getting a user by its id, you could also throw a not-found exception from your service class if a user with that id does not exist. Right now everything is stuffed into the controller, while it feels like this is more the responsibility of the service.
Another thing you could do is perhaps use middleware:
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/middleware/?view=aspnetcore-3.1
You can create validation pipelines for your requests.
I've also tried working with a validation pattern. In that case I created some rules for the checks and applied those rules to whatever needed validating. I then had a validator object that would take all the rules and produce an appropriate output. This made the code cleaner and improved reuse, but it added some complexity and I ended up not using it: it was different from the rest of the codebase and therefore foreign to colleagues, which was a good argument against it.
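For what it's worth, a minimal sketch of such a rule-based validator (all the names below are hypothetical and only illustrate the shape of the idea):

using System.Collections.Generic;
using System.Threading.Tasks;

// Each rule checks one thing and reports an error key when it fails.
public interface IValidationRule<T>
{
    string ErrorKey { get; }
    Task<bool> IsSatisfiedAsync(T input);
}

public class TicketValidator
{
    private readonly IEnumerable<IValidationRule<CreateTicketDTO>> _rules;

    public TicketValidator(IEnumerable<IValidationRule<CreateTicketDTO>> rules) => _rules = rules;

    // Returns the error key of the first failing rule, or null when everything passes.
    public async Task<string> ValidateAsync(CreateTicketDTO ticket)
    {
        foreach (var rule in _rules)
        {
            if (!await rule.IsSatisfiedAsync(ticket))
                return rule.ErrorKey;
        }
        return null;
    }
}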

SimpleMembershipProvider intermittently returning wrong user

I am administrator of a small practice project web application, AngularJS front-end pulling its back-end data from a C#/.NET WebAPI, and I'm handling security using the SimpleMembershipProvider.
I suspect that the way I implemented said security is not the best (I'm told ASP.NET Identity is now the way to go?) but that's another question altogether.
The issue that I'm (very bewilderingly) running into is that I get occasional reports that a page load meant to display a particular user's data returns somebody else's instead. Reloading the page fixes the issue (evidently), and I haven't been able to duplicate the scenario myself or find anything particularly consistent about the users it happens to.
None of the information being displayed is at all sensitive in nature (the app's just a friendly front end for an already public third-party API) so I'm not in panic mode about this, but I am both concerned and confused and want it fixed.
Here is what one of my API controller endpoints looks like:
[Authorize]
public class UserController : ApiController
{
private static int _userId;
private readonly IUserProfileRepository _userProfileRepository;
public UserController()
{
_userProfileRepository = new UserProfileRepository(new DatabaseContext());
_userId = WebSecurity.GetUserId(User.Identity.Name);
}
public UserProfileDto Get()
{
return _userProfileRepository.GetUserProfileById(_userId).ToDto();
}
}
Any feedback on where I might be going wrong here or what might be causing the intermittent inconsistency would be very much appreciated. (Laughter also acceptable if the way I handled this is just really bad. :P )
Static class fields are shared by all instances/threads of the same AppDomain (in your case, the same process). Different HTTP requests are processed by threads running in parallel, so any two threads running at [almost] the same time may (and will) change the value of _userId. You are assigning _userId in the constructor of your controller, and a new instance of the controller is created for each HTTP request routed to UserController, so this assignment happens again and again.
You will have a hard time replicating this problem, since you are a single user testing the code, hence there are no overlapping request threads.
Remove the static specifier from the _userId field declaration of the controller class.
Note: make sure that DatabaseContext is disposed of. One place to do this is an override of Controller.Dispose (a sketch is shown after the code below).
Also change Get to retrieve the user id directly rather than reading it from a static field:
public UserProfileDto Get()
{
    return _userProfileRepository.GetUserProfileById(WebSecurity.GetUserId(User.Identity.Name)).ToDto();
}
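Regarding the disposal note above, here is a minimal sketch of keeping a reference to the context and overriding ApiController.Dispose. This assumes DatabaseContext is disposable, e.g. an Entity Framework DbContext:

[Authorize]
public class UserController : ApiController
{
    private readonly DatabaseContext _context;
    private readonly IUserProfileRepository _userProfileRepository;

    public UserController()
    {
        _context = new DatabaseContext();
        _userProfileRepository = new UserProfileRepository(_context);
    }

    public UserProfileDto Get()
    {
        return _userProfileRepository.GetUserProfileById(WebSecurity.GetUserId(User.Identity.Name)).ToDto();
    }

    // Web API calls Dispose at the end of each request's controller lifetime.
    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            _context.Dispose();
        }
        base.Dispose(disposing);
    }
}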

MVVM and asynchronous data access

So I have a WPF application using the MVVM pattern (Caliburn.Micro). I have the views and view-models wired up, and basically what is missing is the data. The data is to be retrieved "on demand", either from a WCF service, local storage, or memory/cache - the reason being to allow for offline mode and to avoid unnecessary server communication. Another requirement is that the data is retrieved asynchronously so the UI thread is not blocked.
So I was thinking to create some kind of "AssetManager" that the viewmodels use to request data:
_someAssetManager.GetSomeSpecificAsset(assetId, OnGetSomeSpecificAssetCompleted)
Note that it is an asynchronous call. I run into a few different problems though. If the same asset is requested at (roughly) the same time by different view-models, how do we ensure that we don't do unnecessary work and that they both get the same objects that we can bind against?
Not sure I'm having the right approach. I've been glancing a bit at Reactive Framework - but I have no idea how to use it in this scenario. Any suggestions on framework/techniques/patterns that I can use? This seems to be a rather common scenario.
private Dictionary<int, IObservable<IAsset>> inflightRequests = new Dictionary<int, IObservable<IAsset>>();

public IObservable<IAsset> GetSomeAsset(int id)
{
    lock (inflightRequests)
    {
        // People who ask for an inflight request just get the existing one
        if (inflightRequests.ContainsKey(id))
        {
            return inflightRequests[id];
        }

        // Create a new AsyncSubject and put it in the dictionary
        var ret = new AsyncSubject<IAsset>();
        inflightRequests[id] = ret;

        // Actually do the request and "play it" onto the Subject.
        GetSomeAssetForReals(id, result =>
        {
            ret.OnNext(result);
            ret.OnCompleted();

            // We're not inflight anymore, remove the item
            lock (inflightRequests) { inflightRequests.Remove(id); }
        });

        return ret;
    }
}
I've had success with method calls that pass in a delegate that gets called when the data is received. You could layer on the requirement of giving everyone the same data (when a request is already in flight) by checking a boolean field that indicates whether a request is currently happening. I would keep a local collection of the delegates that need calling, so that when the data is finally received, the class holding those delegates can iterate over them and pass in the newly received data.
Something along these lines:
public interface IViewModelDataLoader
{
    void LoadData(AssignData callback);
}

public delegate void AssignData(IEnumerable<DataObject> results);
The class that actually implements this interface could then keep a running tally of whom to notify when the data is done (assuming a singleton model):
public class ViewModelDataLoader : IViewModelDataLoader
{
    private readonly IList<AssignData> callbacksToCall = new List<AssignData>();
    private bool isLoading;

    public void LoadData(AssignData callback)
    {
        callbacksToCall.Add(callback);
        if (isLoading) { return; }
        isLoading = true;

        // Do some long running code here
        var data = something;

        // Now iterate the list and hand everyone the same data
        foreach (var item in callbacksToCall)
        {
            item(data);
        }
        callbacksToCall.Clear();
        isLoading = false;
    }
}
Using the proxy pattern and events you can provide both synchronous and asynchronous data. Have your proxy return cached values for synchronous calls, and notify view-models via events when asynchronous data arrives. The proxy can also be designed to track data requests and throttle server connections (e.g. 'reference counting' calls, data-requested/data-received flags, etc.).
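A rough sketch of that idea (every type and member name below is made up purely for illustration):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical proxy: synchronous reads come from the cache, fresh data is
// fetched in the background and announced through an event.
public class AssetProxy
{
    private readonly Dictionary<int, IAsset> cache = new Dictionary<int, IAsset>();
    private readonly HashSet<int> pending = new HashSet<int>();
    private readonly object gate = new object();

    public event Action<int, IAsset> AssetReceived;

    // Synchronous: returns the cached value, or null if nothing has been fetched yet.
    public IAsset GetCached(int assetId)
    {
        lock (gate)
        {
            cache.TryGetValue(assetId, out var asset);
            return asset;
        }
    }

    // Asynchronous: starts a fetch unless one is already in flight (a simple throttle).
    public void RequestAsset(int assetId)
    {
        lock (gate)
        {
            if (!pending.Add(assetId)) return; // already requested
        }

        Task.Run(() =>
        {
            var asset = FetchFromServiceOrDisk(assetId); // hypothetical slow call (WCF or local storage)
            lock (gate)
            {
                cache[assetId] = asset;
                pending.Remove(assetId);
            }
            AssetReceived?.Invoke(assetId, asset);
        });
    }

    private IAsset FetchFromServiceOrDisk(int assetId) { /* WCF, local storage, ... */ return null; }
}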
I would set up your AssetManager like this:
public interface IAssetManager
{
    IObservable<IAsset> GetSomeSpecificAsset(int assetId);
}
Internally you would need to return a Subject<IAsset> that you populate asynchronously. Do it right and you only have a single call for each call to GetSomeSpecificAsset.
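On the consuming side, a view-model could then subscribe and marshal the result back to the UI thread, for example (a sketch; CurrentAsset and ErrorMessage are hypothetical bindable properties, and the ObserveOn/Subscribe extensions come from System.Reactive.Linq):

// Inside a view-model; _assetManager is an injected IAssetManager.
public void LoadAsset(int assetId)
{
    _assetManager.GetSomeSpecificAsset(assetId)
        .ObserveOn(System.Threading.SynchronizationContext.Current) // deliver the result on the UI thread
        .Subscribe(
            asset => CurrentAsset = asset,      // bindable property, raises PropertyChanged
            ex => ErrorMessage = ex.Message);   // surface failures to the view
}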

Fire and Forget (Asynch) ASP.NET Method Call

We have a service to update customer information on the server. One service call takes around a few seconds, which is normal.
Now we have a new page where, at one time, around 35-50 customers' information can be updated. Changing the service interface to accept all customers together is out of the question at this point.
I need to call a method (say "ProcessCustomerInfo") which will loop through the customer information and call the web service 35-50 times. Calling the service asynchronously is not of much use.
I need to call the method "ProcessCustomerInfo" asynchronously. I am trying to use RegisterAsyncTask for this. There are various examples available on the web, but the problem is that after initiating this call, if I move away from the page, the processing stops.
Is it possible to implement a fire-and-forget method call so that the user can move away from the page (redirect to another page) without stopping the method's processing?
Details on: http://www.codeproject.com/KB/cs/AsyncMethodInvocation.aspx
Basically you can create a delegate which points to the method you want to run asynchronously and then kick it off with BeginInvoke.
// Declare the delegate - name it whatever you would like
public delegate void ProcessCustomerInfoDelegate();

// Instantiate the delegate and kick it off with BeginInvoke
ProcessCustomerInfoDelegate d = new ProcessCustomerInfoDelegate(ProcessCustomerInfo);
d.BeginInvoke(null, null);

// The method which will run asynchronously
void ProcessCustomerInfo()
{
    // this is where you can call your webservice 50 times
}
This was something I whipped up just to do that...
using System;
using System.Runtime.Remoting.Messaging; // for AsyncResult

public class DoAsAsync
{
    private Action action;
    private bool ended;

    public DoAsAsync(Action action)
    {
        this.action = action;
    }

    public void Execute()
    {
        action.BeginInvoke(new AsyncCallback(End), null);
    }

    private void End(IAsyncResult result)
    {
        if (ended)
            return;
        try
        {
            ((Action)((AsyncResult)result).AsyncDelegate).EndInvoke(result);
        }
        catch
        {
            /* do something */
        }
        finally
        {
            ended = true;
        }
    }
}
And then
new DoAsAsync(ProcessCustomerInfo).Execute();
You also need to set the Async property in the Page directive: <%@ Page Async="true" %>
I'm not sure exactly how reliable this is, but it did work for what I needed. I wrote this maybe a year ago.
I believe the issue is that your web service expects a client to return the response to; the service call itself is not one-way communication.
If you're using WCF for your webservices look at http://moustafa-arafa.blogspot.com/2007/08/oneway-operation-in-wcf.html for making a one way service call.
My two cents: IMO whoever put the constraint on you that you're not able to alter the service interface to add a new service method is the one making unreasonable demands. Even if your service is a publicly consumed API, adding a new service method shouldn't impact any existing consumers.
Sure you can.
I think what you want is a true background thread:
Safely running background threads in ASP.NET 2.0
Creating a Background Thread to Log IP Information
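As a minimal sketch of what those articles describe, you could hand the work to the thread pool so the request returns immediately (the button handler and redirect target are hypothetical, and work queued this way is not guaranteed to survive an app pool recycle):

protected void SaveButton_Click(object sender, EventArgs e) // hypothetical button handler
{
    // Queue the long-running loop on a background thread and return right away.
    System.Threading.ThreadPool.QueueUserWorkItem(_ =>
    {
        try
        {
            ProcessCustomerInfo(); // the method that calls the web service 35-50 times
        }
        catch (Exception)
        {
            // Log the failure somewhere persistent; there is no page left to report it to.
        }
    });

    Response.Redirect("Confirmation.aspx"); // the user can move on immediately
}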
