Handle multiple similar requests in webapi - c#

In my WebApi controller I have the following (pseudo) code that receives update notifications from Instagram's real-time API:
[HttpPost]
public void Post(InstagramUpdate instagramUpdate)
{
    var subscriptionId = instagramUpdate.SubscriptionId;
    var lastUpdate = GetLastUpdate(subscriptionId);

    // To avoid breaking my Instagram request limit, do not fetch new images too often.
    if (lastUpdate.AddSeconds(5) < DateTime.UtcNow)
    {
        // More than 5 seconds since the last update for this subscription: get new images.
        GetNewImagesFromInstagram(subscriptionId);
        UpdateLastUpdate(subscriptionId, DateTime.UtcNow);
    }
}
This won't work very well if I receive two update notifications for the same subscription almost simultaneously, since lastUpdate won't have been updated until after the first request has finished processing.
What would be the best way to tackle this problem? I'm thinking of using some kind of cache, but I'm not sure how. Are there any best practices for this kind of thing? I'm guessing it's a common problem: "receive a notification, do something if it hasn't been done recently..."

Thanks to this answer I went with the following approach, using MemoryCache:
[HttpPost]
public void Post(IEnumerable<InstagramUpdate> instagramUpdates)
{
    foreach (var instagramUpdate in instagramUpdates)
    {
        if (WaitingToProcessSubscriptionUpdate(instagramUpdate.Subscription_id))
        {
            // An update for this subscription is already queued; do nothing.
        }
        else
        {
            // Process update
        }
    }
}

private bool WaitingToProcessSubscriptionUpdate(string subscriptionId)
{
    // Check the in-memory cache to see if this subscription is already queued for
    // processing; add it otherwise. AddOrGetExisting returns null when the key was absent.
    var queuedRequest = _cache.AddOrGetExisting(subscriptionId, string.Empty, new CacheItemPolicy
    {
        // Automatically expire this item after 1 minute (e.g. if the update failed)
        AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(1)
    });
    return queuedRequest != null;
}

I'm afraid it may be an awful idea, but... maybe it's worth adding a lock to this method? Note that you cannot lock on an int (lock requires a reference type), so one way is to keep one lock object per subscription:
private static readonly ConcurrentDictionary<int, object> subscriptionLocks =
    new ConcurrentDictionary<int, object>();
and then
int subscriptionId = 1; // add calculation here
var lockObject = subscriptionLocks.GetOrAdd(subscriptionId, _ => new object());
lock (lockObject)
{
    // your method code
}
Feel free to criticize this approach )

Related

EF 6 Connection pool and stored query strings RAM leak

I have two problems.
The first is that the connection pool (OracleConnectionImpl) grows steadily over time until it hits an out-of-memory exception. As you can see in the image, it reached 90 MB over 3-4 hours of running.
I am using short-lived contexts everywhere, but the pool keeps growing and never clears itself. Is there any way for me to clear it?
The second is that EF stores too many duplicate query strings over time.
It mostly stores the ones that come from the .Reload() function, because that query is not parameterized; it hard-codes the ID into the query text.
And then there are strings like "ID" that end up cached 2947 times.
Is there any way to make .Reload() parameterized, or to clear all of those stored strings?
This app refreshes warehouse jobs and pallets every few seconds to keep all machines in sync, and I am not aware of a better way than .Reload() because of the WPF bindings.
Edit 1
I have a simple helper function for reloading many entities at once, plus one extension method. It doesn't matter that the entities are passed as object; the same problem remains even with the last example.
public static void ReloadEntities(bool dispatch, params IEnumerable<object>[] entities)
{
    using (var ctx = new eWMSEntities())
    {
        if (dispatch)
        {
            Application.Current.Dispatcher.Invoke(DispatcherPriority.Background, (SendOrPostCallback)delegate
            {
                entities.SelectMany(x => x).ToList().ForEach(entity =>
                {
                    ctx.Set(entity.GetType()).Attach(entity);
                    ctx.Entry(entity).Reload();
                    ctx.Entry(entity).State = EntityState.Detached;
                });
            }, null);
        }
        else
        {
            entities.SelectMany(x => x).ToList().ForEach(entity =>
            {
                ctx.Set(entity.GetType()).Attach(entity);
                ctx.Entry(entity).Reload();
                ctx.Entry(entity).State = EntityState.Detached;
            });
        }
        // (the using statement already disposes the context, so no explicit Dispose() is needed)
    }
}
public static void ReloadEntity(this object entity, bool dispatch)
{
    using (var ctx = new eWMSEntities())
    {
        ctx.Set(entity.GetType()).Attach(entity);
        if (dispatch)
        {
            Application.Current.Dispatcher.Invoke(DispatcherPriority.Background, (SendOrPostCallback)delegate
            {
                ctx.Entry(entity).Reload();
            }, null);
        }
        else
        {
            ctx.Entry(entity).Reload();
        }
        ctx.Entry(entity).State = EntityState.Detached;
    }
}
while (JobLines.Contains(line))
{
    using (var ctx = new eWMSEntities())
    {
        ctx.T_JOB_LINES.Attach(line);
        ctx.Entry(line).Reload();
    }
    await Task.Delay(3000);
}
This is a snapshot after 3 days of running.
I don't recommend it, but if there is no other solution...
Maybe you can try
GC.Collect();
to force the system to collect whatever is no longer important in your memory.
Use the "Unit of work" pattern - it will solve many problems. I did not find for WPF, but found for ASP.NET MVC

Avoid fast post on webapi c#

I have a problem when a user posts data. Sometimes the post arrives twice very quickly, and this causes a problem on my website.
A user wants to register a form for about $100 and has a $120 balance.
When the post (save) button is pressed, sometimes two posts reach the server almost simultaneously, like:
2018-01-31 19:34:43.660 Register Form 5760$
2018-01-31 19:34:43.663 Register Form 5760$
Therefore my client's balance becomes negative.
I use an if in my code to check the balance, but the requests run so close together that I think both if checks pass at the same time and I miss them.
Therefore I made a lock-control class to avoid concurrency per user, but it does not work well.
I made a global action filter to control the users; this is my code:
public void OnActionExecuting(ActionExecutingContext context)
{
    try
    {
        var controller = (Controller)context.Controller;
        if (controller.User.Identity.IsAuthenticated)
        {
            bool jobDone = false;
            int delay = 0;
            int counter = 0;
            do
            {
                delay = LockControllers.IsRequested(controller.User.Identity.Name);
                if (delay == 0)
                {
                    LockControllers.AddUser(controller.User.Identity.Name);
                    jobDone = true;
                }
                else
                {
                    counter++;
                    System.Threading.Thread.Sleep(delay);
                }
                if (counter >= 10000)
                {
                    context.HttpContext.Response.StatusCode = 400;
                    jobDone = true;
                    context.Result = new ContentResult()
                    {
                        Content = "Attack Detected"
                    };
                }
            } while (!jobDone);
        }
    }
    catch (System.Exception)
    {
    }
}

public void OnActionExecuted(ActionExecutedContext context)
{
    try
    {
        var controller = (Controller)context.Controller;
        if (controller.User.Identity.IsAuthenticated)
        {
            LockControllers.RemoveUser(controller.User.Identity.Name);
        }
    }
    catch (System.Exception)
    {
    }
}
I keep a static list of users and put their threads to sleep until the previous request has finished.
Is there any better way to manage this problem?
So the original question has been edited, and this answer is now partly out of date.
The issue isn't that the code runs too fast - fast is always good :) The issue is that the account is going into negative funds. If the client decides to post a form twice, that is the client's fault. It may be that you only want the client to pay once, which is a different problem.
For the first problem, I would recommend using transactions (https://en.wikipedia.org/wiki/Database_transaction) to lock your table: you apply an update/add (or a set of changes) and force other calls to that table to wait until those operations have completed. You can begin your transaction and, inside it, check that the account has sufficient funds.
If they are only ever meant to pay once, then have a separate table that records whether the user has paid (again within a transaction) before processing the update/add.
http://www.entityframeworktutorial.net/entityframework6/transaction-in-entity-framework.aspx
(Edit: fixing link)
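A minimal sketch of that transaction approach in EF 6 (the context, entity, and variable names here are placeholders, not from the question):

using (var ctx = new AppDbContext())
using (var tx = ctx.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
{
    // Read the balance inside the transaction so concurrent posts serialize here.
    var account = ctx.Accounts.Single(a => a.UserId == userId);

    if (account.Balance < formCost)
    {
        tx.Rollback();
        throw new InvalidOperationException("Insufficient funds.");
    }

    account.Balance -= formCost;
    ctx.Registrations.Add(new Registration { UserId = userId, Amount = formCost });

    ctx.SaveChanges();
    tx.Commit(); // two racing posts cannot both pass the balance check
}

Serializable is deliberately heavy-handed: under contention the database will block or roll back one of the two transactions rather than let both pass the balance check.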
You have a few options here.
You can implement ETag functionality in your app, which you can use for optimistic concurrency. This works well when you are working with records, i.e. you have a database with a data record, return that to the user, and then the user changes it.
You could add a required field with a Guid to your view model, which you pass to your app, add it to an in-memory cache, and check it on each request.
public class RegisterViewModel
{
    [Required]
    public Guid Id { get; set; }

    /* other properties here */
    ...
}
and then use IMemoryCache or IDistributedCache (see the ASP.NET Core docs) to put this Id into the memory cache and validate it on each request:
public async Task<IActionResult> Register(RegisterViewModel register)
{
    if (!ModelState.IsValid)
        return BadRequest(ModelState);

    var userId = ...; /* get userId */

    if (_cache.TryGetValue($"Registration-{userId}", out Guid cachedId) && cachedId == register.Id)
    {
        return BadRequest(new { ErrorMessage = "Command already received from this user" });
    }

    // Set cache options.
    var cacheEntryOptions = new MemoryCacheEntryOptions()
        // Keep in cache for 5 minutes, reset time if accessed.
        .SetSlidingExpiration(TimeSpan.FromMinutes(5));

    // When we're here, the command wasn't executed before, so we save the key in the cache.
    _cache.Set($"Registration-{userId}", register.Id, cacheEntryOptions);

    // Call your service here to process it.
    registrationService.Register(...);

    return Ok();
}
When the second request arrives, the value will already be in the (distributed) memory cache and the operation will fail.
If the caller does not set the Id, validation will fail.
Of course, everything Jonathan Hickey listed in his answer below applies too: you should always validate that there is enough balance, and use EF Core's optimistic or pessimistic concurrency.
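For reference, a minimal sketch of what EF Core optimistic concurrency looks like with a rowversion token (the entity and property names are illustrative, not from the question):

public class Account
{
    public int Id { get; set; }
    public decimal Balance { get; set; }

    // Marks this column as a concurrency token; SQL Server updates it on every write.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}

// When two requests race, the second SaveChangesAsync() sees a stale RowVersion
// and throws, instead of silently overdrawing the balance.
try
{
    account.Balance -= formCost;
    await ctx.SaveChangesAsync();
}
catch (DbUpdateConcurrencyException)
{
    // Reload and re-check the balance, or reject the duplicate request.
}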

Filter Change Notifications in Active Directory: Create, Delete, Undelete

I am currently using the Change Notifications in Active Directory Domain Services in .NET as described in this blog. This will return all events that happen on a selected object (or in the subtree of that object). I now want to filter the list of events for creation and deletion (and maybe undeletion) events.
I would like to tell the ChangeNotifier class to only observe create/delete/undelete events. The other solution is to receive all events and filter them on my side. I know that in the case of the deletion of an object, the attribute list that is returned will contain the attribute isDeleted with the value True. But is there a way to see if the event represents the creation of an object? In my tests the value for usnchanged is always usncreated+1 for user objects, and both are equal for OUs, but can this be assured in high-frequency ADs? It is also possible to compare the created and modified timestamps. And how can I tell if an object has been undeleted?
Just for the record, here is the main part of the code from the blog:
public class ChangeNotifier : IDisposable
{
    static void Main(string[] args)
    {
        using (LdapConnection connect = CreateConnection("localhost"))
        {
            using (ChangeNotifier notifier = new ChangeNotifier(connect))
            {
                //register some objects for notifications (limit 5)
                notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
                notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);

                notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);

                Console.WriteLine("Waiting for changes...");
                Console.WriteLine();
                Console.ReadLine();
            }
        }
    }

    static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
    {
        Console.WriteLine(e.Result.DistinguishedName);
        foreach (string attrib in e.Result.Attributes.AttributeNames)
        {
            foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
            {
                Console.WriteLine("\t{0}: {1}", attrib, item);
            }
        }
        Console.WriteLine();
        Console.WriteLine("====================");
        Console.WriteLine();
    }

    LdapConnection _connection;
    HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();

    public ChangeNotifier(LdapConnection connection)
    {
        _connection = connection;
        _connection.AutoBind = true;
    }

    public void Register(string dn, SearchScope scope)
    {
        SearchRequest request = new SearchRequest(
            dn,                //root the search here
            "(objectClass=*)", //very inclusive
            scope,             //any scope works
            null               //we are interested in all attributes
        );

        //register our search
        request.Controls.Add(new DirectoryNotificationControl());

        //we will send this async and register our callback
        //note how we would like to have partial results
        IAsyncResult result = _connection.BeginSendRequest(
            request,
            TimeSpan.FromDays(1), //set timeout to a day...
            PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
            Notify,
            request
        );

        //store the hash for disposal later
        _results.Add(result);
    }

    private void Notify(IAsyncResult result)
    {
        //since our search is long running, we don't want to use EndSendRequest
        PartialResultsCollection prc = _connection.GetPartialResults(result);
        foreach (SearchResultEntry entry in prc)
        {
            OnObjectChanged(new ObjectChangedEventArgs(entry));
        }
    }

    private void OnObjectChanged(ObjectChangedEventArgs args)
    {
        if (ObjectChanged != null)
        {
            ObjectChanged(this, args);
        }
    }

    public event EventHandler<ObjectChangedEventArgs> ObjectChanged;

    #region IDisposable Members

    public void Dispose()
    {
        foreach (var result in _results)
        {
            //end each async search
            _connection.Abort(result);
        }
    }

    #endregion
}

public class ObjectChangedEventArgs : EventArgs
{
    public ObjectChangedEventArgs(SearchResultEntry entry)
    {
        Result = entry;
    }

    public SearchResultEntry Result { get; set; }
}
I participated in a design review about five years back on a project that started out using AD change notification. Very similar questions to yours were asked. I can share what I remember, and I don't think things have changed much since then. We ended up switching to DirSync.
It didn't seem possible to get just creates and deletes from AD change notifications. We found that change notification produced enough events, when monitoring a large directory, that notification processing could bottleneck and fall behind. This API is not designed for scale, but as I recall the performance/latency was not the primary reason we switched.
Yes, the usn relationship for new objects generally holds, although I think there are multi-DC scenarios where you can get usncreated == usnchanged for a new user. But we didn't test that extensively, because...
The important thing for us was that change notification only gives you reliable object-creation detection under the unrealistic assumption that your machine is up 100% of the time! In production systems there are always cases where you need to reboot and catch up or re-synchronize, and we switched to DirSync because it has a robust way to handle those scenarios.
In our case, a missed object create could block email to a new user for an indeterminate time. That obviously wouldn't be good; we needed to be sure. For AD change notifications, getting that resync right would have been more work and hard to test. With DirSync it's more natural, and there's a fast-path resume mechanism that usually avoids a full resync. For safety, I think we triggered a full re-synchronization every day.
DirSync is not as real-time as change notification, but it's possible to get ~30-second average latency by issuing the DirSync query once a minute.
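A minimal sketch of that polling loop with System.DirectoryServices.Protocols (the connection setup, base DN, and the cookie persistence helpers are assumptions; OfType requires System.Linq):

// Poll AD with DirSync once a minute. The cookie records where the last query
// left off, so after a restart we resume instead of missing creates.
byte[] cookie = LoadCookie(); // hypothetical persistence helper

while (true)
{
    var request = new SearchRequest(
        "dc=example,dc=net", "(objectClass=user)", SearchScope.Subtree, null);
    request.Controls.Add(new DirSyncRequestControl(cookie));

    var response = (SearchResponse)connection.SendRequest(request);

    foreach (SearchResultEntry entry in response.Entries)
    {
        // Created and changed objects arrive here; deletes carry isDeleted.
        Console.WriteLine(entry.DistinguishedName);
    }

    // Persist the updated cookie before sleeping.
    cookie = response.Controls.OfType<DirSyncResponseControl>().First().Cookie;
    SaveCookie(cookie); // hypothetical persistence helper

    Thread.Sleep(TimeSpan.FromMinutes(1));
}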

How to seed an observable from a database

I'm trying to expose an observable sequence that gives observers all existing records in a database table plus any future items. For the sake of argument, let's say it's log entries. Therefore, I'd have something like this:
public class LogService
{
    private readonly Subject<LogEntry> entries;

    public LogService()
    {
        this.entries = new Subject<LogEntry>();
        this.entries
            .Buffer(...)
            .Subscribe(async x => await WriteLogEntriesToDatabaseAsync(x));
    }

    public IObservable<LogEntry> Entries
    {
        get { return this.entries; }
    }

    public IObservable<LogEntry> AllLogEntries
    {
        get
        {
            // how the heck?
        }
    }

    public void Log(string message)
    {
        this.entries.OnNext(new LogEntry(message));
    }

    private async Task<IEnumerable<LogEntry>> GetLogEntriesAsync()
    {
        // reads existing entries from the DB table and returns them
    }

    private async Task WriteLogEntriesToDatabaseAsync(IList<LogEntry> entries)
    {
        // writes entries to the database
    }
}
My initial thought for the implementation of AllLogEntries was something like this:
return Observable.Create<LogEntry>(
    async observer =>
    {
        var existingEntries = await this.GetLogEntriesAsync();
        foreach (var existingEntry in existingEntries)
        {
            observer.OnNext(existingEntry);
        }
        return this.entries.Subscribe(observer);
    });
But the problem with this is that there could be log entries that have been buffered and not yet written to the database. Those entries will be missed, because they are not yet in the database and have already passed through the entries observable.
My next thought was to separate the buffered entries from the non-buffered ones and use the buffered ones when implementing AllLogEntries:
return Observable.Create<LogEntry>(
    async observer =>
    {
        var existingEntries = await this.GetLogEntriesAsync();
        foreach (var existingEntry in existingEntries)
        {
            observer.OnNext(existingEntry);
        }
        return this.bufferedEntries
            .SelectMany(x => x)
            .Subscribe(observer);
    });
There are two problems with this:
It means clients of AllLogEntries also have to wait for the buffer timespan to pass before they receive their log entries. I want them to see log entries instantaneously.
There is still a race condition in that log entries could be written to the database between the point at which I finish reading the existing ones and the point at which I return the future entries.
So my question is: how would I actually go about achieving my requirements here with no possibility of race conditions, and avoiding any major performance penalties?
To do this from the client code, you will probably have to implement a solution using polling, and then look for differences between calls. Possibly combining a solution with
Observable.Interval() (http://rxwiki.wikidot.com/101samples#toc28) and
Observable.DistinctUntilChanged()
will give you a sufficient solution.
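A minimal sketch of that polling idea, reusing GetLogEntriesAsync from the question (the 5-second interval and the assumption that LogEntry has a unique Id are mine; requires System.Reactive.Linq and System.Reactive.Threading.Tasks):

// Poll the table periodically and emit only entries not seen before.
public IObservable<LogEntry> AllLogEntries()
{
    return Observable
        .Interval(TimeSpan.FromSeconds(5))
        .SelectMany(_ => this.GetLogEntriesAsync().ToObservable())
        .SelectMany(batch => batch)
        .Distinct(entry => entry.Id); // note: keeps every seen Id in memory
}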
Alternatively, I'd suggest you try to find a solution where the clients are notified when the DB/table is updated. In a web application, you could use something like SignalR to do this.
For example: http://techbrij.com/database-change-notifications-asp-net-signalr-sqldependency
If it's not a web application, a similar update mechanism via sockets may work.
See these links (they came from the accepted answer of SignalR polling database for updates):
http://xsockets.net/api/net-c#snippet61
https://github.com/codeplanner/XSocketsPollingLegacyDB

Pattern for concurrent cache sharing

OK, I was a little unsure how best to name this problem :) But assume this scenario: you're going out and fetching webpages (for various URLs) and caching them locally. The cache part is pretty easy to solve, even with multiple threads.
However, imagine that one thread starts fetching a URL, and a couple of milliseconds later another wants the same URL. Is there any good pattern for making the second thread's method wait for the first one to fetch the page, insert it into the cache, and return it, so you don't have to make multiple requests - with little enough overhead that it's worth doing even for requests that take about 300-700 ms? And without blocking requests for other URLs.
Basically, when requests for identical URLs come in tightly after each other, I want the second request to "piggyback" on the first request.
I had a loose idea of having a dictionary into which you insert an object keyed by URL when you start fetching a page, and lock on it. If there's already a matching key, the second thread gets the object, locks on it, and then tries to fetch the URL from the actual cache.
I'm a little unsure of the particulars, however, to make it really thread-safe; using ConcurrentDictionary might be one part of it...
Is there any common pattern or solution for scenarios like this?
Breakdown, wrong behavior:
Thread 1: Checks the cache; the entry doesn't exist, so it starts fetching the URL
Thread 2: Starts fetching the same URL, since it still doesn't exist in the cache
Thread 1: Finishes, inserts into the cache, and returns the page
Thread 2: Finishes, also inserts into the cache (or discards it), and returns the page
Breakdown, correct behavior:
Thread 1: Checks the cache; the entry doesn't exist, so it starts fetching the URL
Thread 2: Wants the same URL, but sees it's currently being fetched, so waits on thread 1
Thread 1: Finishes, inserts into the cache, and returns the page
Thread 2: Notices that thread 1 is finished and returns the page that thread 1 fetched
EDIT
Most solutions so far seem to misunderstand the problem and address only the caching. As I said, that isn't the problem. The problem is that when a second external web fetch for the same URL starts before the first has cached its result, the second should use the result from the first rather than performing another fetch.
You could use a ConcurrentDictionary<K,V> and a variant of double-checked locking:
public static string GetUrlContent(string url)
{
    object value1 = _cache.GetOrAdd(url, new object());

    if (value1 == null)  // null check only required if content
        return null;     // could legitimately be a null string

    var urlContent = value1 as string;
    if (urlContent != null)
        return urlContent; // got the content

    // value1 isn't a string, which means that it's an object to lock against
    lock (value1)
    {
        object value2 = _cache[url];

        // at this point value2 will *either* be the url content
        // *or* the object that we already hold a lock against
        if (value2 != value1)
            return (string)value2; // got the content

        urlContent = FetchContentFromTheWeb(url); // todo
        _cache[url] = urlContent;
        return urlContent;
    }
}

private static readonly ConcurrentDictionary<string, object> _cache =
    new ConcurrentDictionary<string, object>();
EDIT: My code is quite a bit uglier now, but it uses a separate lock per URL. This allows different URLs to be fetched asynchronously; however, each URL will only be fetched once.
public class UrlFetcher
{
    static Hashtable cache = Hashtable.Synchronized(new Hashtable());

    public static String GetCachedUrl(String url)
    {
        // exactly 1 fetcher is created per URL
        InternalFetcher fetcher = (InternalFetcher)cache[url];
        if( fetcher == null )
        {
            lock( cache.SyncRoot )
            {
                fetcher = (InternalFetcher)cache[url];
                if( fetcher == null )
                {
                    fetcher = new InternalFetcher(url);
                    cache[url] = fetcher;
                }
            }
        }
        // blocks all threads requesting the same URL
        return fetcher.Contents;
    }

    /// <summary>Each fetcher locks on itself and is initialized with null contents.
    /// The first thread to call fetcher.Contents will cause the fetch to occur, and
    /// block until completion.</summary>
    private class InternalFetcher
    {
        private String url;
        private String contents;

        public InternalFetcher(String url)
        {
            this.url = url;
            this.contents = null;
        }

        public String Contents
        {
            get
            {
                if( contents == null )
                {
                    lock( this ) // "this" is an instance of InternalFetcher...
                    {
                        if( contents == null )
                        {
                            contents = FetchFromWeb(url);
                        }
                    }
                }
                return contents;
            }
        }
    }
}
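A more modern variant of the same per-URL, fetch-once idea (not from the original answers) uses ConcurrentDictionary with Lazy<Task<string>>, which avoids blocking threads while the fetch is in flight; a sketch, assuming HttpClient does the fetching:

using System;
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;

public static class AsyncUrlCache
{
    private static readonly HttpClient _http = new HttpClient();

    // Lazy ensures the fetch delegate runs at most once per URL, even when
    // several threads call GetOrAdd for the same key concurrently.
    private static readonly ConcurrentDictionary<string, Lazy<Task<string>>> _cache =
        new ConcurrentDictionary<string, Lazy<Task<string>>>();

    public static Task<string> GetUrlContentAsync(string url)
    {
        var lazy = _cache.GetOrAdd(url,
            key => new Lazy<Task<string>>(() => _http.GetStringAsync(key)));
        return lazy.Value; // a second caller awaits the same in-flight task
    }
}

One caveat: a failed fetch stays cached as a faulted task, so production code would want to evict entries whose task faults.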
Will the Semaphore please stand up! stand up! stand up!
Use a semaphore; you can easily synchronize your threads with it, in both cases where:
you are trying to load a page that is currently being cached, or
you are saving the cache to a file while a page is being loaded from it.
In both scenarios you will run into trouble.
It is just like the readers and writers problem, a classic of operating-system race conditions: when a thread wants to rebuild the cache or start caching a page, no thread should be reading from it; a writer should wait until any ongoing read has finished before replacing the cache, and no two threads should cache the same page into the same file. All readers, on the other hand, can read from the cache at any time, as long as no writer is writing to it.
You should read some semaphore usage samples on MSDN; it is very easy to use: a thread that wants the resource calls the semaphore, and if the resource can be granted, it does its work; otherwise it sleeps and waits to be woken up when the resource is ready.
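A sketch of that idea with one SemaphoreSlim gate per URL (the names are illustrative, and the fetch delegate is assumed to be supplied by the caller):

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public static class SemaphoreUrlCache
{
    private static readonly ConcurrentDictionary<string, string> _cache =
        new ConcurrentDictionary<string, string>();

    // One gate per URL, so fetches of different URLs never block each other.
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> _gates =
        new ConcurrentDictionary<string, SemaphoreSlim>();

    public static async Task<string> GetAsync(string url, Func<string, Task<string>> fetch)
    {
        if (_cache.TryGetValue(url, out var cached))
            return cached;

        var gate = _gates.GetOrAdd(url, _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync();
        try
        {
            // Re-check: another thread may have fetched while we waited.
            if (_cache.TryGetValue(url, out cached))
                return cached;

            var content = await fetch(url); // e.g. an HttpClient call
            _cache[url] = content;
            return content;
        }
        finally
        {
            gate.Release();
        }
    }
}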
Disclaimer: This might be a n00bish answer. Please pardon me if it is.
I'd recommend using a shared dictionary object with locks to keep track of the URLs that are currently being fetched or have already been fetched.
At every request, check the URL against this object:
If an entry for the URL is present, check the cache. (This means one of the threads has either fetched it or is currently fetching it.)
If it's available in the cache, use it; otherwise put the current thread to sleep for a while and check back again. (If it's not in the cache, some thread is still fetching it, so wait while it finishes.)
If the entry is not found in the dictionary object, add the URL to it and send the request. Once it obtains a response, add it to the cache.
This logic should work (a rough sketch follows below); however, you would need to take care of cache expiration and removal of entries from the dictionary object.
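A rough sketch of that wait-and-recheck approach, with illustrative names (a ConcurrentDictionary stands in for the "shared dictionary object with locks", and the Thread.Sleep poll is the simple but less efficient part):

using System;
using System.Collections.Concurrent;
using System.Threading;

public static class PollingUrlCache
{
    private static readonly ConcurrentDictionary<string, string> _cache =
        new ConcurrentDictionary<string, string>();

    // Tracks URLs that some thread has claimed and is currently fetching.
    private static readonly ConcurrentDictionary<string, bool> _inFlight =
        new ConcurrentDictionary<string, bool>();

    public static string Get(string url, Func<string, string> fetch)
    {
        while (true)
        {
            if (_cache.TryGetValue(url, out var content))
                return content;                      // already fetched

            if (_inFlight.TryAdd(url, true))         // we claimed the fetch
            {
                try
                {
                    content = fetch(url);
                    _cache[url] = content;
                    return content;
                }
                finally
                {
                    _inFlight.TryRemove(url, out _); // release the claim
                }
            }

            Thread.Sleep(50); // someone else is fetching; check back shortly
        }
    }
}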
My solution is to use an AtomicBoolean to control access to the database when the cache has expired or the entry doesn't exist.
At any moment, only one thread (call it the read thread) may access the database; the other threads spin until the read thread returns the data and writes it into the cache.
Here is the code, implemented in Java:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.BiConsumer;
import java.util.function.Function;

// Note: Singleton and log are the author's own utilities (not shown here).
public class CacheBreakDownDefender<K, R> {

    /**
     * false = do not write null to the cache when getting a null value from the database
     */
    private final boolean writeNullToCache;

    /**
     * one flag per distinct query key
     */
    private final ConcurrentHashMap<K, AtomicBoolean> selectingDBTagMap = new ConcurrentHashMap<>();

    public static <K, R> CacheBreakDownDefender<K, R> getInstance(Class<K> keyType, Class<R> resultType) {
        return Singleton.get(keyType.getName() + resultType.getName(), () -> new CacheBreakDownDefender<>(false));
    }

    public static <K, R> CacheBreakDownDefender<K, R> getInstance(Class<K> keyType, Class<R> resultType, boolean writeNullToCache) {
        return Singleton.get(keyType.getName() + resultType.getName(), () -> new CacheBreakDownDefender<>(writeNullToCache));
    }

    private CacheBreakDownDefender(boolean writeNullToCache) {
        this.writeNullToCache = writeNullToCache;
    }

    public R readFromCache(K key, Function<K, ? extends R> getFromCache, Function<K, ? extends R> getFromDB, BiConsumer<K, R> writeCache) throws InterruptedException {
        R result = getFromCache.apply(key);
        if (result == null) {
            final AtomicBoolean selectingDB = selectingDBTagMap.computeIfAbsent(key, x -> new AtomicBoolean(false));
            if (selectingDB.compareAndSet(false, true)) {
                // this thread won the race: read the database and fill the cache
                try {
                    result = getFromDB.apply(key);
                    if (result != null || writeNullToCache) {
                        writeCache.accept(key, result);
                    }
                } finally {
                    selectingDB.getAndSet(false);
                    selectingDBTagMap.remove(key);
                }
            } else {
                // another thread is reading the database: spin until it finishes
                while (selectingDB.get()) {
                    TimeUnit.MILLISECONDS.sleep(0L);
                    // do nothing...
                }
                return getFromCache.apply(key);
            }
        }
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, String> map = new ConcurrentHashMap<>();
        CacheBreakDownDefender<String, String> instance = CacheBreakDownDefender.getInstance(String.class, String.class, true);
        for (int i = 0; i < 9; i++) {
            int finalI = i;
            new Thread(() -> {
                String kele = null;
                try {
                    if (finalI == 6) {
                        kele = instance.readFromCache("kele2", map::get, key -> "helloword2", map::put);
                    } else {
                        kele = instance.readFromCache("kele", map::get, key -> "helloword", map::put);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                log.info("result= {}", kele);
            }).start();
        }
        TimeUnit.SECONDS.sleep(2L);
    }
}
This is not specific to concurrent caches, but it applies to all caches:
"A cache with a bad policy is another name for a memory leak" (Raymond Chen)
