Avoid fast post on WebAPI - C#

I have a problem when users post data. Sometimes two posts arrive at the server almost simultaneously, and this causes a problem on my website.
For example, a user wants to register a form for $100 and has a $120 balance.
When the post (save) button is pressed, sometimes two posts reach the server within milliseconds of each other, like:
2018-01-31 19:34:43.660 Register Form 5760$
2018-01-31 19:34:43.663 Register Form 5760$
As a result, my client's balance becomes negative.
I use an if statement in my code to check the balance, but the requests run so close together that both pass the check before either one saves, so the check is effectively bypassed.
To avoid this, I made a LockControllers class to prevent concurrency per user, but it does not work well.
I made a global action filter to control the users; this is my code:
public void OnActionExecuting(ActionExecutingContext context)
{
    try
    {
        var controller = (Controller)context.Controller;
        if (controller.User.Identity.IsAuthenticated)
        {
            bool jobDone = false;
            int delay = 0;
            int counter = 0;
            do
            {
                delay = LockControllers.IsRequested(controller.User.Identity.Name);
                if (delay == 0)
                {
                    LockControllers.AddUser(controller.User.Identity.Name);
                    jobDone = true;
                }
                else
                {
                    counter++;
                    System.Threading.Thread.Sleep(delay);
                }
                if (counter >= 10000)
                {
                    context.HttpContext.Response.StatusCode = 400;
                    jobDone = true;
                    context.Result = new ContentResult()
                    {
                        Content = "Attack Detected"
                    };
                }
            } while (!jobDone);
        }
    }
    catch (System.Exception)
    {
    }
}
public void OnActionExecuted(ActionExecutedContext context)
{
    try
    {
        var controller = (Controller)context.Controller;
        if (controller.User.Identity.IsAuthenticated)
        {
            LockControllers.RemoveUser(controller.User.Identity.Name);
        }
    }
    catch (System.Exception)
    {
    }
}
I keep a static list of users and sleep their threads until the previous request has finished.
Is there a better way to manage this problem?

(Note: the original question has since been edited, so this answer may no longer apply.)
The issue isn't that the code runs too fast; fast is always good :) The issue is that the account is going into negative funds. If the client decides to post a form twice, that is the client's fault. It may be that you only want the client to pay only once, which is a separate problem.
For the first problem, I would recommend using transactions (https://en.wikipedia.org/wiki/Database_transaction) to lock your table. That means you apply the update (or set of changes) and force other calls to that table to wait until those operations have completed. You can begin your transaction and check inside it that the account has sufficient funds before applying the change.
If the client is only ever meant to pay once, then keep a separate table that records whether the user has paid (again, within a transaction) and check it before processing the update.
http://www.entityframeworktutorial.net/entityframework6/transaction-in-entity-framework.aspx
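As a rough illustration of the transaction approach, here is a minimal sketch of checking the balance and applying the charge inside an EF Core transaction; the _dbContext, Accounts set, and property names are assumptions, not the asker's actual model:
public async Task<IActionResult> RegisterForm(int accountId, decimal amount)
{
    // Serializable isolation makes concurrent writers to the same rows wait,
    // so two near-simultaneous posts cannot both pass the balance check.
    using (var transaction = await _dbContext.Database.BeginTransactionAsync(
        System.Data.IsolationLevel.Serializable))
    {
        var account = await _dbContext.Accounts.SingleAsync(a => a.Id == accountId);

        if (account.Balance < amount)
        {
            return BadRequest("Insufficient funds.");
        }

        account.Balance -= amount;
        await _dbContext.SaveChangesAsync();
        await transaction.CommitAsync();
    }
    return Ok();
}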

You have a few options here.
You can implement ETag functionality in your app and use it for optimistic concurrency. This works well when you are working with records, i.e. you have a data record in the database, return it to the user, and then the user changes it.
Alternatively, you could add a required Guid field to your view model, pass it to your app, add it to an in-memory cache, and check it on each request.
public class RegisterViewModel
{
    [Required]
    public Guid Id { get; set; }

    /* other properties here */
    ...
}
Then use IMemoryCache or IDistributedCache (see the ASP.NET Core docs) to put this Id into the (distributed) memory cache and validate it on each request:
public async Task<IActionResult> Register(RegisterViewModel register)
{
    if (!ModelState.IsValid)
        return BadRequest(ModelState);

    var userId = ...; /* get userId */

    // If this Id is already cached for the user, it is a duplicate request
    if (_cache.TryGetValue($"Registration-{userId}", out Guid cachedId) && cachedId == register.Id)
    {
        return BadRequest(new { ErrorMessage = "Command already received from this user" });
    }

    // Set cache options.
    var cacheEntryOptions = new MemoryCacheEntryOptions()
        // Keep in cache for 5 minutes, reset time if accessed.
        .SetSlidingExpiration(TimeSpan.FromMinutes(5));

    // When we're here, the command wasn't executed before, so we save the key in the cache
    _cache.Set($"Registration-{userId}", register.Id, cacheEntryOptions);

    // call your service here to process it
    registrationService.Register(...);

    return Ok();
}
When the second request arrives, the value will already be in the (distributed) memory cache and the operation will fail.
If the caller does not set the Id, validation will fail.
Of course, everything Jonathan Hickey listed in his answer also applies: you should always validate that there is enough balance, and use EF Core's optimistic or pessimistic concurrency.
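For completeness, the in-memory cache used above is registered with dependency injection and injected into the controller; a minimal sketch, assuming the default ASP.NET Core setup (controller name is illustrative):
// In Startup.ConfigureServices (or on the WebApplication builder)
services.AddMemoryCache();

// In the controller, take IMemoryCache from the constructor
public class RegistrationController : Controller
{
    private readonly IMemoryCache _cache;

    public RegistrationController(IMemoryCache cache)
    {
        _cache = cache;
    }
}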


Thread safe WebApi put requests

I have a Web API and I want to make the logic inside this controller thread-safe.
A user should only be able to update a payroll after the previous update has finished; two updates at the same time should not be possible.
As you can see in the code, I added a boolean IsLock column to the Payroll entity and tried to handle concurrent update requests that way, but it is not thread-safe.
How can I make it thread-safe?
[HttpPut("{year}/{month}")]
public async Task<NoContentResult> Approve([FromRoute] int year, [FromRoute] int month)
{
var payroll = _dataContext.Payrolls
.SingleOrDefaultAsync(p =>
p.Month == month && p.Year == year);
if (payroll.IsLock)
{
throw new ValidationException(
$"The payroll {payroll.Id} is locked.");
}
try
{
payroll.IsLock = true;
_dataContext.Payrolls.Update(payroll);
await _dataContext.SaveChangesAsync(cancellationToken);
payroll.Status = PayrollStatus.Approved;
_dataContext.Payrolls.Update(payroll);
await _dataContext.SaveChangesAsync(cancellationToken);
payroll.IsLock = false;
_dataContext.Payrolls.Update(payroll);
await _dataContext.SaveChangesAsync(cancellationToken);
return NoContent();
}
catch (Exception)
{
payroll.IsLock = false;
_dataContext.Payrolls.Update(payroll);
await _dataContext.SaveChangesAsync(cancellationToken);
throw;
}
}
You are looking for Concurrency Tokens. Each row in the payroll table would have one. When a user loaded the edit interface for a payroll, the concurrency token would be sent to the client. The client would include the concurrency token in the request to update the payroll. The update would only succeed if the concurrency token had not changed - meaning that the data had not changed since the user fetched it to start the edit.
Entity Framework uses the concurrency tokens internally, as well, so it won't save changes from a stale entity (where the data has changed since it was loaded).
The current IsLocked solution has some flaws. If two API requests are received at the same time, both may read the payroll data and see that it isn't locked. Both requests would then lock the row, make competing changes, and release the lock without ever realizing there were simultaneous edits.
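A minimal sketch of what that can look like with EF Core's [Timestamp] row version; the RowVersion property and the Conflict response are illustrative additions, not taken from the question:
public class Payroll
{
    public int Id { get; set; }
    public int Year { get; set; }
    public int Month { get; set; }
    public PayrollStatus Status { get; set; }

    // EF Core treats this as a concurrency token; SQL Server bumps it on every update.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}

// In the update action, a stale edit surfaces as DbUpdateConcurrencyException:
try
{
    payroll.Status = PayrollStatus.Approved;
    await _dataContext.SaveChangesAsync(cancellationToken);
}
catch (DbUpdateConcurrencyException)
{
    // Another request changed this payroll since it was loaded; reject (or reload and retry).
    return Conflict($"The payroll {payroll.Id} was modified by another request.");
}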

ASP.Net core sometimes the action controller is called twice

I have an ASP.NET Core application and have integrated the Spreedly payment gateway for processing payments. In my log files I can see that sometimes the payment controller executes twice. I generate an ID based on the time the request was received, and the IDs are sometimes one second apart or have exactly the same time. This results in the card being charged twice in the few cases when this happens. I can't seem to figure out what could be triggering it.
Following is the code that I am using.
The user fills in the application form, and on the pay button click I use this code to trigger Spreedly:
$('#REG').click(function () {
    var options = {
        company_name: "abcd",
        sidebar_top_description: "Fees",
        sidebar_bottom_description: "Only Visa and Mastercard accepted",
        amount: "@string.Format("{0:c}", Convert.ToDecimal(Model.FeeOutstanding))"
    }
    document.getElementById('payment').value = 'App'
    SpreedlyExpress.init(environmentKey, options);
    SpreedlyExpress.openView();
    $('#spreedly-modal-overlay').css({ "position": "fixed", "z-index": "9999", "bottom": "0", "top": "0", "right": "0", "left": "0" });
});
This opens the Spreedly payment form as a popup where the user enters the card information and hits the pay button, which executes the payment controller:
public async Task<IActionResult> Index(DynamicViewModel model)
{
    if (ModelState.IsValid)
    {
        try
        {
            if (TempData.ContainsKey("PaymentFlag") && !String.IsNullOrEmpty(TempData["PaymentFlag"].ToString()))
            {
                // Some code logic that calls a few async methods

                // generate an id based on the time of the current request
                var id = "APP-" + DateTime.Now.ToString("yyyyMMddHmmss-") + model.UserID;

                // ... Other code here
            }
The ID that I generate is logged, and I can sometimes see in the log file that it ran twice for a customer, with the IDs having either the exact same time or a one-second difference. I have tested the double-click scenario and have put in code to prevent double clicks, but I still can't understand why this sometimes happens. It is not frequent; roughly one case in a hundred payments.
I have an action attribute to handle duplicate requests. After putting in this code the number of duplicate requests dropped, but not to zero. In a few cases the controller still gets called twice.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class NoDuplicateRequestAttribute : ActionFilterAttribute
{
    public int DelayRequest = 10;

    // The Error Message that will be displayed in case of excessive Requests
    public string ErrorMessage = "Excessive Request Attempts Detected.";

    // This will store the URL to Redirect errors to
    public string RedirectURL;

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Store our HttpContext.Request (for easier reference and code brevity)
        var request = filterContext.HttpContext.Request;

        // Resolve the memory cache from the request services
        var cache = filterContext.HttpContext.RequestServices.GetService<IMemoryCache>();

        // Grab the IP Address from the originating Request (example)
        var originationInfo = request.HttpContext.Connection.RemoteIpAddress.ToString() ?? request.HttpContext.Features.Get<IHttpConnectionFeature>()?.RemoteIpAddress.ToString();

        // Append the User Agent
        originationInfo += request.Headers["User-Agent"].ToString();

        // Now we just need the target URL Information
        var targetInfo = request.HttpContext.Request.GetDisplayUrl() + request.QueryString;

        // Generate a hash for your strings (appends each of the bytes of
        // the value into a single hashed string)
        var hashValue = string.Join("", MD5.Create().ComputeHash(Encoding.ASCII.GetBytes(originationInfo + targetInfo)).Select(s => s.ToString("x2")));

        string cachedHash;

        // Checks if the hashed value is contained in the Cache (indicating a repeat request)
        if (cache.TryGetValue(hashValue, out cachedHash))
        {
            // Adds the Error Message to the Model and Redirect
        }
        else
        {
            // Adds an empty object to the cache using the hashValue
            // as a key (This sets the expiration that will determine
            // if the Request is valid or not)
            var opts = new MemoryCacheEntryOptions()
            {
                SlidingExpiration = TimeSpan.FromSeconds(DelayRequest)
            };
            cache.Set(hashValue, cachedHash, opts);
        }

        base.OnActionExecuting(filterContext);
    }
}
This isn't an ASP.NET Core issue. I'm 99% certain there are in fact multiple requests coming from the client and ASP.NET Core is simply handling them as it is meant to.
An option for you would be to put a guid or other identifier on the page and send it with the request. In your Controller, check your cache or session to see if that identifier already exists. If it does, throw an exception or return Ok() or log the occurrence or whatever you want to do in that case but don't charge the card.
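A minimal sketch of that suggestion, assuming a hidden one-time token rendered into the payment page and an injected IMemoryCache; the parameter and helper names are illustrative, not part of the original answer:
// Rendered into the page (Razor): <input type="hidden" name="paymentToken" value="@Guid.NewGuid()" />

public async Task<IActionResult> Index(DynamicViewModel model, Guid paymentToken)
{
    var cacheKey = $"PaymentToken-{paymentToken}";

    // If the token has been seen before, this is a duplicate submission: do not charge again.
    if (_cache.TryGetValue(cacheKey, out _))
    {
        return Ok();
    }

    // Remember the token long enough to cover any duplicate requests.
    _cache.Set(cacheKey, true, TimeSpan.FromMinutes(10));

    // ... proceed with the Spreedly charge exactly once ...
    return Ok();
}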

How to get instant feedback on a change in CQRS

Let's assume I have the following pseudo code to implement a Command-based change in terms of CQRS (whether to use Event Sourcing is an open question as well) in my WebApi project:
public IHttpActionResult ChangeVendor(ChangeVendorModel changeModel)
{
    /* 1 */ // user input validation
    /* 2 */ var changeCommand = changeModel.MapTo<ChangeVendorCommand>();
    /* 3 */ bus.Send(changeCommand); // start the change processing
    /* 4 */ return Ok();
}
The explanation:
1. We perform basic user input validation (such as string length or only positive numbers) but not business validation (such as this vendor being on the black list).
2. We convert the input model to a command for the bus.
3. We send the prepared change command through the bus to be processed.
4. By this we mean the change was applied and a domain model is available for further manipulation.
The questions:
a. The bus processing is asynchronous. How can I be sure (after step 4) that my changes were applied and the app is ready to render a success view displaying the changed record from a database designed for querying purposes?
b. Let's say a record version conflict happened (data violation), or the model did not pass the business rules (domain violation). How can I instantly notify the user about this from the bus? In a badly designed system, the user could see a successful result because we successfully scheduled the change on the bus, and only later see an error notification when the attempt to apply the actual change was made.
As I suggested in the comments, you could wait for the event that signals completion and only return to the user once it is received. Some pseudo code:
public IHttpActionResult ChangeVendor(ChangeVendorModel changeModel)
{
    var changeCommand = changeModel.MapTo<ChangeVendorCommand>();
    bus.Send(changeCommand); // start the change processing

    var replyReceived = false;
    bool success = false;
    while (!replyReceived)
    {
        Task vendorChanged = Task.Factory.StartNew(() =>
        {
            var reply = bus.Receive<VendorChanged>();
            if (reply.CorrelationToken == changeCommand.CorrelationToken)
            {
                replyReceived = true;
                success = true;
            }
        }, SomeTimeout);

        Task vendorChangeFailed = Task.Factory.StartNew(() =>
        {
            var reply = bus.Receive<VendorChangeFailed>();
            if (reply.CorrelationToken == changeCommand.CorrelationToken)
            {
                replyReceived = true;
                success = false;
            }
        }, SomeTimeout);

        Task.WaitAny(new Task[] { vendorChanged, vendorChangeFailed });
    }

    if (success)
    {
        return Ok();
    }
    else
    {
        return ChangeVendorFailed();
    }
}
Obviously the receive needs to be on its own subscription, to ensure it doesn't take replies meant for other instances, and you may be able to create the subscription so it receives only messages with the correct correlation token or other identifying property, but this gives you some idea of one way to skin this cat and make your async workflow look synchronous to the user.

Handle multiple similar requests in webapi

In my WebApi controller I have the following (pseudo) code that receives update notifications from Instagram's real-time API:
[HttpPost]
public void Post(InstagramUpdate instagramUpdate)
{
    var subscriptionId = instagramUpdate.SubscriptionId;
    var lastUpdate = GetLastUpdate(subscriptionId);

    // To avoid breaking my Instagram request limit, do not fetch new images too often.
    if (lastUpdate.AddSeconds(5) < DateTime.UtcNow)
    {
        // More than 5 seconds since the last update for this subscription. Get new images.
        GetNewImagesFromInstagram(subscriptionId);
        UpdateLastUpdate(subscriptionId, DateTime.UtcNow);
    }
}
This won't work very well if I receive two update notifications for the same subscription almost simultaneously, since lastUpdate won't have been updated until after the first request has been processed.
What would be the best way to tackle this problem? I'm thinking of using some kind of cache, but I'm not sure how. Are there any best practices for this kind of thing? I'm guessing it's a common problem: "receive notification, do something if it hasn't been done recently...".
Thanks to this answer I went with the following approach, using MemoryCache:
[HttpPost]
public void Post(IEnumerable<InstagramUpdate> instagramUpdates)
{
    foreach (var instagramUpdate in instagramUpdates)
    {
        if (WaitingToProcessSubscriptionUpdate(instagramUpdate.Subscription_id))
        {
            // Ongoing request, do nothing
        }
        else
        {
            // Process update
        }
    }
}

private bool WaitingToProcessSubscriptionUpdate(string subscriptionId)
{
    // Check the in-memory cache to see if this subscription is already queued for processing. Add it otherwise.
    var queuedRequest = _cache.AddOrGetExisting(subscriptionId, string.Empty, new CacheItemPolicy
    {
        // Automatically expire this item after 1 minute (if the update failed, for example)
        AbsoluteExpiration = DateTime.Now.AddMinutes(1)
    });

    return queuedRequest != null;
}
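For reference, the _cache field here is presumably the System.Runtime.Caching.MemoryCache, since that is where AddOrGetExisting and CacheItemPolicy live; a minimal field declaration, which is an assumption rather than something shown in the post:
// Requires a reference to System.Runtime.Caching
private static readonly MemoryCache _cache = MemoryCache.Default;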
I am afraid this may be an awful idea, but... maybe it is worth adding a lock to this method? Something like:
private List<int> subscriptions = new List<int>();
and then
int subscriptionId = 1; // add calculation here
int subscriptionIdIndex = subscriptions.IndexOf(subscriptionId);
lock (subscriptions[subscriptionIdIndex])
{
    // your method code
}
Feel free to criticize this approach )
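One caveat worth noting: as written, that won't compile, because lock requires a reference type and a List<int> element is a value type. A per-subscription SemaphoreSlim kept in a ConcurrentDictionary is a common alternative; the sketch below is illustrative and not part of the original answer:
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// One gate per subscription id, created on first use.
private static readonly ConcurrentDictionary<string, SemaphoreSlim> _locks =
    new ConcurrentDictionary<string, SemaphoreSlim>();

private async Task ProcessSubscriptionAsync(string subscriptionId)
{
    var gate = _locks.GetOrAdd(subscriptionId, _ => new SemaphoreSlim(1, 1));
    await gate.WaitAsync();
    try
    {
        // ... fetch new images / update lastUpdate for this subscription ...
    }
    finally
    {
        gate.Release();
    }
}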

Filter Change Notifications in Active Directory: Create, Delete, Undelete

I am currently using the Change Notifications in Active Directory Domain Services in .NET as described in this blog. This returns all events that happen on a selected object (or in the subtree of that object). I now want to filter the list of events for creation and deletion (and maybe undeletion) events.
I would like to tell the ChangeNotifier class to only observe create/delete/undelete events. The other option is to receive all events and filter them on my side. I know that in the case of the deletion of an object, the attribute list that is returned will contain the attribute isDeleted with the value True. But is there a way to see whether an event represents the creation of an object? In my tests the value of usnChanged is always usnCreated+1 for user objects, and both are equal for OUs, but can this be relied on in high-frequency ADs? It is also possible to compare the created and modified timestamps. And how can I tell if an object has been undeleted?
Just for the record, here is the main part of the code from the blog:
public class ChangeNotifier : IDisposable
{
    static void Main(string[] args)
    {
        using (LdapConnection connect = CreateConnection("localhost"))
        {
            using (ChangeNotifier notifier = new ChangeNotifier(connect))
            {
                //register some objects for notifications (limit 5)
                notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
                notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);

                notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);

                Console.WriteLine("Waiting for changes...");
                Console.WriteLine();
                Console.ReadLine();
            }
        }
    }

    static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
    {
        Console.WriteLine(e.Result.DistinguishedName);
        foreach (string attrib in e.Result.Attributes.AttributeNames)
        {
            foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
            {
                Console.WriteLine("\t{0}: {1}", attrib, item);
            }
        }
        Console.WriteLine();
        Console.WriteLine("====================");
        Console.WriteLine();
    }

    LdapConnection _connection;
    HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();

    public ChangeNotifier(LdapConnection connection)
    {
        _connection = connection;
        _connection.AutoBind = true;
    }

    public void Register(string dn, SearchScope scope)
    {
        SearchRequest request = new SearchRequest(
            dn, //root the search here
            "(objectClass=*)", //very inclusive
            scope, //any scope works
            null //we are interested in all attributes
        );

        //register our search
        request.Controls.Add(new DirectoryNotificationControl());

        //we will send this async and register our callback
        //note how we would like to have partial results
        IAsyncResult result = _connection.BeginSendRequest(
            request,
            TimeSpan.FromDays(1), //set timeout to a day...
            PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
            Notify,
            request
        );

        //store the hash for disposal later
        _results.Add(result);
    }

    private void Notify(IAsyncResult result)
    {
        //since our search is long running, we don't want to use EndSendRequest
        PartialResultsCollection prc = _connection.GetPartialResults(result);
        foreach (SearchResultEntry entry in prc)
        {
            OnObjectChanged(new ObjectChangedEventArgs(entry));
        }
    }

    private void OnObjectChanged(ObjectChangedEventArgs args)
    {
        if (ObjectChanged != null)
        {
            ObjectChanged(this, args);
        }
    }

    public event EventHandler<ObjectChangedEventArgs> ObjectChanged;

    #region IDisposable Members

    public void Dispose()
    {
        foreach (var result in _results)
        {
            //end each async search
            _connection.Abort(result);
        }
    }

    #endregion
}

public class ObjectChangedEventArgs : EventArgs
{
    public ObjectChangedEventArgs(SearchResultEntry entry)
    {
        Result = entry;
    }

    public SearchResultEntry Result { get; set; }
}
I participated in a design review about five years back on a project that started out using AD change notification. Very similar questions to yours were asked. I can share what I remember, and I don't think things have changed much since then. We ended up switching to DirSync.
It didn't seem possible to get just creates and deletes from AD change notifications. We also found that change notification generated enough events when monitoring a large directory that notification processing could bottleneck and fall behind. This API is not designed for scale, but as I recall, performance/latency were not the primary reasons we switched.
Yes, the USN relationship for new objects generally holds, although I think there are multi-DC scenarios where you can get usnCreated == usnChanged for a new user. We didn't test that extensively, because...
The important thing for us was that change notification only gives you reliable object-creation detection under the unrealistic assumption that your machine is up 100% of the time! In production systems there are always cases where you need to reboot and catch up or re-synchronize, and we switched to DirSync because it has a robust way to handle those scenarios.
In our case, a missed object creation could block email to a new user for an indeterminate time, which obviously wouldn't be good; we needed to be sure. For AD change notifications, getting that resync right would have taken more work and been hard to test. For DirSync it is more natural, and there is a fast-path resume mechanism that usually avoids a full resync. For safety, I think we triggered a full re-synchronization every day.
DirSync is not as real-time as change notification, but it is possible to get roughly 30-second average latency by issuing the DirSync query once a minute.
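A minimal sketch of the polled DirSync query the answer describes, using System.DirectoryServices.Protocols against the LdapConnection from the code above; the domain DN, attribute list, and cookie helpers are placeholders, not part of the answer:
// Load the cookie persisted from the previous poll; null/empty on the first run (hypothetical helper).
byte[] cookie = LoadCookieFromDisk();

var request = new SearchRequest(
    "dc=example,dc=com",            // search root (placeholder)
    "(objectClass=user)",           // only user objects
    SearchScope.Subtree,
    "isDeleted", "whenCreated", "uSNCreated", "uSNChanged");

request.Controls.Add(new DirSyncRequestControl(cookie, DirectorySynchronizationOptions.None));

var response = (SearchResponse)connection.SendRequest(request);

foreach (SearchResultEntry entry in response.Entries)
{
    // New and changed objects arrive here; deleted objects carry isDeleted=TRUE.
    Console.WriteLine(entry.DistinguishedName);
}

// Save the cookie so the next poll only returns changes since this one (hypothetical helper).
foreach (DirectoryControl control in response.Controls)
{
    if (control is DirSyncResponseControl dirSync)
    {
        SaveCookieToDisk(dirSync.Cookie);
    }
}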
