I basically took the code from here https://github.com/Azure-Samples/active-directory-dotnet-webapp-webapi-multitenant-openidconnect/blob/master/TodoListWebApp/DAL/EFADALTokenCache.cs but it is not suitable for my application, as I don't need the cache per user as in the example. Accordingly, I removed the constructor that accepted the user as a parameter, since I wanted the cache to be global. I have come up with this version:
public class EFTestTokenCache : TokenCache
{
    private TestEntities _TestEntities = new TestEntities();
    private TestTokenCache _cache;

    public EFTestTokenCache()
    {
        this.AfterAccess = AfterAccessNotification;
        this.BeforeAccess = BeforeAccessNotification;
        this.BeforeWrite = BeforeWriteNotification;
    }

    // clean up the DB
    public override void Clear()
    {
        base.Clear();
        foreach (var cacheEntry in _TestEntities.TestTokenCaches)
            _TestEntities.TestTokenCaches.Remove(cacheEntry);
        _TestEntities.SaveChanges();
    }

    // Notification raised before ADAL accesses the cache.
    // This is your chance to update the in-memory copy from the DB, if the in-memory version is stale
    void BeforeAccessNotification(TokenCacheNotificationArgs args)
    {
        if (_cache == null)
        {
            // first time access
            _cache = _TestEntities.TestTokenCaches.FirstOrDefault(c => c.webUserUniqueId == args.DisplayableId);
        }
        else
        {
            // retrieve last write from the DB
            var status = from e in _TestEntities.TestTokenCaches
                         where (e.webUserUniqueId == args.DisplayableId)
                         select new
                         {
                             LastWrite = e.LastWrite
                         };
            // if the in-memory copy is older than the persistent copy,
            // read from storage and update the in-memory copy
            if (status.First().LastWrite > _cache.LastWrite)
            {
                _cache = _TestEntities.TestTokenCaches.FirstOrDefault(c => c.webUserUniqueId == args.DisplayableId);
            }
        }
        this.Deserialize((_cache == null) ? null : _cache.cacheBits);
    }

    // Notification raised after ADAL accessed the cache.
    // If the HasStateChanged flag is set, ADAL changed the content of the cache
    void AfterAccessNotification(TokenCacheNotificationArgs args)
    {
        // if state changed
        if (this.HasStateChanged)
        {
            if (_cache != null)
            {
                _cache.cacheBits = this.Serialize();
                _cache.LastWrite = DateTime.Now;
            }
            else
            {
                _cache = new TestTokenCache
                {
                    webUserUniqueId = args.DisplayableId,
                    cacheBits = this.Serialize(),
                    LastWrite = DateTime.Now
                };
            }
            // update the DB and the lastwrite
            _TestEntities.Entry(_cache).State = _cache.EntryId == 0 ? EntityState.Added : EntityState.Modified;
            _TestEntities.SaveChanges();
            this.HasStateChanged = false;
        }
    }

    void BeforeWriteNotification(TokenCacheNotificationArgs args)
    {
        // if you want to ensure that no concurrent writes take place, use this notification to place a lock on the entry
    }
}
Do you think this would work fine as a global cache, or is it buggy and does it always have to be user-based as in the example?
Another question: why is the database cleared in Clear()? Does this mean that whenever the application pool shuts down (or similar) my database would be cleared? That should not happen.
Any help is appreciated.
If you are trying to implement a global token cache irrespective of the user, then I see an issue with your code: it still looks up an existing cache entry per signed-in user,
because it uses webUserUniqueId to filter:
_TestEntities.TestTokenCaches.FirstOrDefault(c => c.webUserUniqueId == args.DisplayableId);
In the original sample, every user has a set of tokens saved in the DB (or in a collection), so that when they sign in to the web app they can call their web APIs directly without having to re-authenticate or repeat consent.
I am not sure of the purpose behind your change, but in my opinion, if you are implementing a custom token cache for a web app, it is good to provide the desired level of isolation between tokens for different users signing in.
Also, the Clear() method clears the cache by deleting all the items in the DB, but this method is never called in the GitHub sample, so an application pool recycle will not clear your database. You need to add a call to authContext.TokenCache.Clear() in the SignOut() method of AccountController if you want the cache cleared on user sign-out.
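For example, a minimal sketch of wiring that up (using the sample's ADALTokenCache, authority and claim names as placeholders; adjust them to your own setup) could look like this:
public void SignOut()
{
    // Clear the persisted token cache entry for the signed-in user before signing out.
    // ADALTokenCache and the authority below are the ones used in the GitHub sample.
    string userObjectId = ClaimsPrincipal.Current
        .FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier").Value;
    var authContext = new AuthenticationContext(
        "https://login.microsoftonline.com/common",
        new ADALTokenCache(userObjectId));
    authContext.TokenCache.Clear();

    HttpContext.GetOwinContext().Authentication.SignOut(
        OpenIdConnectAuthenticationDefaults.AuthenticationType,
        CookieAuthenticationDefaults.AuthenticationType);
}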
Related
I have a problem when a user posts data. Sometimes the post is processed so fast that it causes a problem on my website.
The user wants to register a form for about $100 and has a $120 balance.
When the post (save) button is pressed, sometimes two posts arrive at the server very close together, like:
2018-01-31 19:34:43.660 Register Form 5760$
2018-01-31 19:34:43.663 Register Form 5760$
Therefore my client's balance becomes negative.
I use an if statement in my code to check the balance, but the requests run so fast that I think both checks happen before either save, so I miss the problem.
Therefore I made a LockControllers class to avoid concurrency per user, but it does not work well.
I made a global action filter to control the users; this is my code:
public void OnActionExecuting(ActionExecutingContext context)
{
    try
    {
        var controller = (Controller)context.Controller;
        if (controller.User.Identity.IsAuthenticated)
        {
            bool jobDone = false;
            int delay = 0;
            int counter = 0;
            do
            {
                delay = LockControllers.IsRequested(controller.User.Identity.Name);
                if (delay == 0)
                {
                    LockControllers.AddUser(controller.User.Identity.Name);
                    jobDone = true;
                }
                else
                {
                    counter++;
                    System.Threading.Thread.Sleep(delay);
                }
                if (counter >= 10000)
                {
                    context.HttpContext.Response.StatusCode = 400;
                    jobDone = true;
                    context.Result = new ContentResult()
                    {
                        Content = "Attack Detected"
                    };
                }
            } while (!jobDone);
        }
    }
    catch (System.Exception)
    {
    }
}

public void OnActionExecuted(ActionExecutedContext context)
{
    try
    {
        var controller = (Controller)context.Controller;
        if (controller.User.Identity.IsAuthenticated)
        {
            LockControllers.RemoveUser(controller.User.Identity.Name);
        }
    }
    catch (System.Exception)
    {
    }
}
I made a static list of users and sleep their threads until the previous request completes.
Is there any better way to manage this problem?
The original question has since been edited, so this answer is invalid.
So the issue isn't that the code runs too fast; fast is always good :) The issue is that the account is going into negative funds. If the client decides to post a form twice, that is the client's fault. It may be that you only want the client to pay once, which is a different problem.
For the first problem, I would recommend using transactions (https://en.wikipedia.org/wiki/Database_transaction) to lock your table. That means you perform the update/add (or set of changes) inside a transaction and force other calls to that table to wait until those operations have completed. You can begin your transaction and then check that the account has the correct amount of funds.
If the case is that they are only ever meant to pay once, then have a separate table that records whether the user has paid (again within a transaction) before processing the update/add.
http://www.entityframeworktutorial.net/entityframework6/transaction-in-entity-framework.aspx
(Edit: fixing link)
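As a rough sketch of what that could look like with EF6 (the AppDbContext, Account and RegistrationForm types and the userName/formAmount variables are made-up placeholders, not your code):
using (var db = new AppDbContext())
using (var tx = db.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
{
    var account = db.Accounts.Single(a => a.UserName == userName);

    // Re-check the balance inside the transaction, so a concurrent request
    // cannot pass the same check before this one has committed.
    if (account.Balance < formAmount)
    {
        tx.Rollback();
        throw new InvalidOperationException("Insufficient funds.");
    }

    account.Balance -= formAmount;
    db.RegistrationForms.Add(new RegistrationForm { UserName = userName, Amount = formAmount });

    db.SaveChanges();
    tx.Commit();
}
With a serializable transaction, one of two simultaneous posts will simply wait (or fail and can be retried), so the balance can never be checked and spent twice at the same time.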
You have a few options here:
You can implement ETag functionality in your app, which you can use for optimistic concurrency. This works well when you are working with records, i.e. you have a database record, return it to the user, and then the user changes it.
You could add a required field with a GUID to your view model, pass it to your app, add it to an in-memory cache, and check it on each request.
public class RegisterViewModel
{
    [Required]
    public Guid Id { get; set; }

    /* other properties here */
    ...
}
and then use IMemoryCache or IDistributedCache (see the ASP.NET Core docs) to put this Id into the memory cache and validate it on each request:
public async Task<IActionResult> Register(RegisterViewModel register)
{
    if (!ModelState.IsValid)
        return BadRequest(ModelState);

    var userId = ...; /* get userId */

    // reject the request if this registration Id was already seen for this user
    if (_cache.TryGetValue($"Registration-{userId}", out Guid cachedId) && cachedId == register.Id)
    {
        return BadRequest(new { ErrorMessage = "Command already received by this user" });
    }

    // Set cache options.
    var cacheEntryOptions = new MemoryCacheEntryOptions()
        // Keep in cache for 5 minutes, reset time if accessed.
        .SetSlidingExpiration(TimeSpan.FromMinutes(5));

    // when we're here, the command wasn't executed before, so we save the key in the cache
    _cache.Set($"Registration-{userId}", register.Id, cacheEntryOptions);

    // call your service here to process it
    registrationService.Register(...);

    return Ok();
}
When the second request with the same Id arrives, the value will already be in the (distributed) memory cache and the operation will fail.
If the caller does not set the Id, validation will fail.
Of course, everything Jonathan Hickey listed in his answer also applies: you should always validate that there is enough balance and use EF Core's optimistic or pessimistic concurrency.
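For the EF Core optimistic-concurrency part, a minimal sketch (the Account entity, the amount variable and the _db field are made-up names) looks like this:
public class Account
{
    public int Id { get; set; }
    public decimal Balance { get; set; }

    // EF Core treats a [Timestamp] property as a concurrency token (rowversion on SQL Server):
    // if another request updated the row in the meantime, SaveChanges() throws.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}

try
{
    account.Balance -= amount;
    await _db.SaveChangesAsync();
}
catch (DbUpdateConcurrencyException)
{
    // The row changed since it was loaded: reload the account, re-check the balance
    // and retry, or return an error to the caller.
}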
// constructor
public ADALTokenCache(string user)
{
    // associate the cache to the current user of the web app
    User = user;
    this.AfterAccess = AfterAccessNotification;
    this.BeforeAccess = BeforeAccessNotification;
    this.BeforeWrite = BeforeWriteNotification;
    // look up the entry in the DB
    Cache = db.UserTokenCacheList.FirstOrDefault(c => c.webUserUniqueId == User);
    // place the entry in memory
    this.Deserialize((Cache == null) ? null : Cache.cacheBits);
}

// clean up the DB
public override void Clear()
{
    base.Clear();
    foreach (var cacheEntry in db.UserTokenCacheList)
        db.UserTokenCacheList.Remove(cacheEntry);
    db.SaveChanges();
}

// Notification raised before ADAL accesses the cache.
// This is your chance to update the in-memory copy from the DB, if the in-memory version is stale
void BeforeAccessNotification(TokenCacheNotificationArgs args)
{
    if (Cache == null)
    {
        // first time access
        Cache = db.UserTokenCacheList.FirstOrDefault(c => c.webUserUniqueId == User);
    }
    else
    {
        // retrieve last write from the DB
        var status = from e in db.UserTokenCacheList
                     where (e.webUserUniqueId == User)
                     select new
                     {
                         LastWrite = e.LastWrite
                     };
        // if the in-memory copy is older than the persistent copy,
        // read from storage and update the in-memory copy
        if (status.First().LastWrite > Cache.LastWrite)
        {
            Cache = db.UserTokenCacheList.FirstOrDefault(c => c.webUserUniqueId == User);
        }
    }
    this.Deserialize((Cache == null) ? null : Cache.cacheBits);
}

// Notification raised after ADAL accessed the cache.
// If the HasStateChanged flag is set, ADAL changed the content of the cache
void AfterAccessNotification(TokenCacheNotificationArgs args)
{
    // if state changed
    if (this.HasStateChanged)
    {
        Cache = new UserTokenCache
        {
            webUserUniqueId = User,
            cacheBits = this.Serialize(),
            LastWrite = DateTime.Now
        };
        // update the DB and the lastwrite
        db.Entry(Cache).State = Cache.UserTokenCacheId == 0 ? EntityState.Added : EntityState.Modified;
        db.SaveChanges();
        this.HasStateChanged = true;
    }
}

void BeforeWriteNotification(TokenCacheNotificationArgs args)
{
    // if you want to ensure that no concurrent writes take place, use this notification to place a lock on the entry
}
This is my code for managing the ADAL token cache.
I got the following exception:
Failed to acquire token silently. Call method AcquireToken
I get this exception in the following code, near var authResultDisc:
// SharePoint connection to get list items
DiscoveryClient discClient = new DiscoveryClient(SettingsHelper.DiscoveryServiceEndpointUri,
    async () =>
    {
        var authResultDisc = await authContext.AcquireTokenSilentAsync(
            SettingsHelper.DiscoveryServiceResourceId,
            new ClientCredential(SettingsHelper.ClientId, SettingsHelper.AppKey),
            new UserIdentifier(userObjectId, UserIdentifierType.UniqueId));
        return authResultDisc.AccessToken;
    });
var dcr = await discClient.DiscoverCapabilityAsync("RootSite");
My thinking on this is that it is not clearing the database entries.
Can anyone help me solve this error?
This is an expected exception: it is thrown when the access token cannot be found in the cache and refreshing the access token fails. Please check whether you acquire the access token before you call this function.
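If you want to handle it rather than just see the exception, a common pattern (sketch only, reusing the authContext/SettingsHelper names from the question) is to catch the failure and send the user back through sign-in so the cache gets repopulated:
try
{
    var authResultDisc = await authContext.AcquireTokenSilentAsync(
        SettingsHelper.DiscoveryServiceResourceId,
        new ClientCredential(SettingsHelper.ClientId, SettingsHelper.AppKey),
        new UserIdentifier(userObjectId, UserIdentifierType.UniqueId));
    return authResultDisc.AccessToken;
}
catch (AdalException)
{
    // No usable token in the cache and the refresh failed:
    // trigger a fresh OpenID Connect sign-in so a new authorization code
    // is redeemed and the cache entry is recreated.
    HttpContext.GetOwinContext().Authentication.Challenge(
        new AuthenticationProperties { RedirectUri = Request.Url.ToString() },
        OpenIdConnectAuthenticationDefaults.AuthenticationType);
    throw;
}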
I have a requirement where we need a plugin to retrieve a session id from an external system and cache it for a certain time. I use a field on the entity to test whether the session is actually being cached. When I refresh the CRM form a couple of times, the output suggests there are (consistently) four versions of the same key. I have tried clearing the cache and testing again, but I still get the same results.
Any help appreciated, thanks in advance.
Output on each refresh of the page:
20170511_125342:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125358:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125410:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125342:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125437:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125358:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125358:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125437:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
To accomplish this, I have implemented the following code:
public class SessionPlugin : IPlugin
{
    public static readonly ObjectCache Cache = MemoryCache.Default;
    private static readonly string _sessionField = "new_sessionid";

    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        try
        {
            if (context.MessageName.ToLower() != "retrieve" && context.Stage != 40)
                return;

            var userId = context.InitiatingUserId.ToString();

            // Use the userid as key for the cache
            var sessionId = CacheSessionId(userId, GetSessionId(userId));
            sessionId = $"{sessionId}:{Cache.Select(kvp => kvp.Key == userId).ToList().Count}:{userId}";

            // Assign session id to entity
            var entity = (Entity)context.OutputParameters["BusinessEntity"];
            if (entity.Contains(_sessionField))
                entity[_sessionField] = sessionId;
            else
                entity.Attributes.Add(new KeyValuePair<string, object>(_sessionField, sessionId));
        }
        catch (Exception e)
        {
            throw new InvalidPluginExecutionException(e.Message);
        }
    }

    private string CacheSessionId(string key, string sessionId)
    {
        // If value is in cache, return it
        if (Cache.Contains(key))
            return Cache.Get(key).ToString();

        var cacheItemPolicy = new CacheItemPolicy()
        {
            AbsoluteExpiration = ObjectCache.InfiniteAbsoluteExpiration,
            Priority = CacheItemPriority.Default
        };
        Cache.Add(key, sessionId, cacheItemPolicy);
        return sessionId;
    }

    private string GetSessionId(string user)
    {
        // this will be replaced with the actual call to the external service for the session id
        return DateTime.Now.ToString("yyyyMMdd_hhmmss");
    }
}
This has been greatly explained by Daryl here: https://stackoverflow.com/a/35643860/7708157
Basically, you do not have one MemoryCache instance for the whole CRM system. Your output simply proves that there are multiple app domains for every plugin, so even static variables stored in such a plugin can have multiple values, which you cannot rely on. There is no documentation on MSDN that explains how the sandboxing works (especially app domains in this case), but using static variables is certainly not a good idea. Of course, if you are dealing with CRM Online, you cannot be sure whether there is a single front-end server or many of them (which will also result in this behaviour).
Class-level variables should be limited to configuration information. Using a class-level variable as you are doing is not supported. In CRM Online, because of multiple web front ends, a specific request may be executed on a different server, by a different instance of the plugin class, than another request. Overall, assume CRM is stateless and that, unless data is persisted and retrieved, nothing should be assumed to be continuous between plugin executions.
Per the SDK:
The plug-in's Execute method should be written to be stateless because
the constructor is not called for every invocation of the plug-in.
Also, multiple system threads could execute the plug-in at the same
time. All per invocation state information is stored in the context,
so you should not use global variables or attempt to store any data in
member variables for use during the next plug-in invocation unless
that data was obtained from the configuration parameter provided to
the constructor.
Reference: https://msdn.microsoft.com/en-us/library/gg328263.aspx
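If you do need settings inside the plugin, the supported place for them is the configuration strings that CRM passes to the plugin constructor, for example (sketch only; the endpoint value is whatever you register on the plugin step):
public class SessionPlugin : IPlugin
{
    private readonly string _sessionServiceUrl;

    // CRM passes the unsecure/secure configuration registered on the step into the constructor;
    // per the SDK this is the only state you should keep across invocations.
    public SessionPlugin(string unsecureConfig, string secureConfig)
    {
        _sessionServiceUrl = unsecureConfig;
    }

    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        // Fetch the session id per invocation instead of relying on a static cache.
        var sessionId = GetSessionId(context.InitiatingUserId.ToString());
        // ... write sessionId to the output entity as in the original code ...
    }

    private string GetSessionId(string user)
    {
        // placeholder for the real call to the external service at _sessionServiceUrl
        return DateTime.Now.ToString("yyyyMMdd_hhmmss");
    }
}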
There are a great number of articles available regarding thread-safe caching; here's an example:
private static object _lock = new object();

public void CacheData()
{
    SPListItemCollection oListItems;

    oListItems = (SPListItemCollection)Cache["ListItemCacheName"];
    if (oListItems == null)
    {
        lock (_lock)
        {
            // Ensure that the data was not loaded by a concurrent thread
            // while waiting for lock.
            oListItems = (SPListItemCollection)Cache["ListItemCacheName"];
            if (oListItems == null)
            {
                oListItems = DoQueryToReturnItems();
                Cache.Add("ListItemCacheName", oListItems, ..);
            }
        }
    }
}
However, this example depends on the request for the cached data also rebuilding the cache.
I'm looking for a solution where the request and the rebuild are separate. Here's the scenario.
I have a web service that I want to monitor for certain types of error. If an error occurs, I create a monitor object and cache it; it is updatable and is locked accordingly during updates. All's well so far.
Elsewhere, I check for the existence of the cached object and the data it contains. This would work straight out of the box except for one particular scenario.
If the cache object is being updated, say for a status change, I would like to wait and get the latest info rather than the current info, which, if returned, would be out of date. So in my fetch code I need to check whether the object is currently being created/updated and, if so, wait and then retry.
As I pointed out, there are many examples of cache-locking patterns, but I can't seem to find one for this scenario. Any ideas as to how to go about this would be appreciated.
You can try the following code, which uses two locks. The write lock in the setter is quite simple and protects the cache from being written by more than one thread. The getter uses a simple double-checked lock.
Now, the trick is in the Refresh() method, which uses the same lock as the getter. The method takes the lock and, as a first step, removes the list from the cache. That makes any getter fail the first null check and wait for the lock. Meanwhile, the method gets the items, sets the cache again, and releases the lock.
When control comes back to the getter, it reads the cache again, and now it contains the list.
public class CacheData
{
    private static object _readLock = new object();
    private static object _writeLock = new object();

    public SPListItemCollection ListItem
    {
        get
        {
            var oListItems = (SPListItemCollection)Cache["ListItemCacheName"];
            if (oListItems == null)
            {
                lock (_readLock)
                {
                    oListItems = (SPListItemCollection)Cache["ListItemCacheName"];
                    if (oListItems == null)
                    {
                        oListItems = DoQueryToReturnItems();
                        Cache.Add("ListItemCacheName", oListItems, ..);
                    }
                }
            }
            return oListItems;
        }
        set
        {
            lock (_writeLock)
            {
                Cache.Add("ListItemCacheName", value, ..);
            }
        }
    }

    public void Refresh()
    {
        lock (_readLock)
        {
            Cache.Remove("ListItemCacheName");
            var oListItems = DoQueryToReturnItems();
            ListItem = oListItems;
        }
    }
}
You can make the method and property static if you do not need a CacheData instance.
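Usage would then look roughly like this (sketch):
// In the monitoring/update path: rebuild the cache after a status change.
var cacheData = new CacheData();
cacheData.Refresh();

// In the read path: this blocks on the read lock while Refresh() is running,
// so it returns the freshly rebuilt list instead of stale or missing data.
var items = cacheData.ListItem;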
I would appreciate some pointers regarding data access/control in an MVC-based multi-tenant site:
Is there a better/more secure/more elegant way to make sure that in a multi-tenant site each user can handle only their own data?
There are a number of tenants using the same app: firstTenant.myapp.com, secondTenant.myapp.com, ...
//
// GET: /Customer/
// show this tenant's customer info only
public ViewResult Index()
{
    // get TenantID from the server-side cache
    int TenantID = Convert.ToInt16(new AppSettings()["TenantID"]);
    return View(context.Customers.ToList().Where(c => c.TenantID == TenantID));
}
If a user logs in for the first time and there is no server-side cache for this tenant/user, AppSettings checks the DB and stores the TenantID in the cache.
Each table in the database contains a TenantID field, which is used to limit data access to the appropriate tenant.
So, to come to the point: instead of checking in each action of each controller whether the data belongs to the current tenant, can I do something more 'productive'?
Example:
When a firstTenant admin edits some info for user 4, the URL is:
http://firstTenant.myapp.com/User/Edit/4
Let's say that the user with ID 2 belongs to secondTenant. The admin from firstTenant puts
http://firstTenant.myapp.com/User/Edit/2 in the URL and tries to get info that is not owned by his company.
To prevent this, in the controller I check whether the info being edited is actually owned by the current tenant.
//
// GET: /User/Edit/
public ActionResult Edit(int id)
{
    // set tenant ID
    int TenantID = Convert.ToInt32(new AppSettings()["TenantID"]);
    // check whether the requested info is actually owned by this tenant
    User user = context.Users.Where(u => u.TenantID == TenantID).SingleOrDefault(u => u.UserID == id);
    // in case this tenant doesn't have this user ID, i.e. the returned User == null,
    // something is wrong, so handle the bad request
    return View(user);
}
Basically, this sort of check needs to be placed in every controller that accesses any data. Is there a better way to handle this (filters, attributes, ...), and if so, how?
I chose to use action filters to do this. It may not be the most elegant solution, but it is the cleanest of the solutions we've tried so far.
I keep the tenant (in our case, it's a team) in the URL like this: https://myapp.com/{team}/tasks/details/1234
I use custom model binding to map {team} into an actual Team object, so my action methods look like this:
[AjaxAuthorize, TeamMember, TeamTask("id")]
public ActionResult Details(Team team, Task id)
The TeamMember attribute verifies that the currently logged-in user actually belongs to the team. It also verifies that the team actually exists:
public class TeamMemberAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        base.OnActionExecuting(filterContext);

        var httpContext = filterContext.RequestContext.HttpContext;
        Team team = filterContext.ActionParameters["team"] as Team;
        long userId = long.Parse(httpContext.User.Identity.Name);

        if (team == null || team.Members.Where(m => m.Id == userId).Count() == 0)
        {
            httpContext.Response.StatusCode = 403;

            ViewResult insufficientPermissions = new ViewResult();
            insufficientPermissions.ViewName = "InsufficientPermissions";
            filterContext.Result = insufficientPermissions;
        }
    }
}
Similarly, the TeamTask attribute ensures that the task in question actually belongs to the team.
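For completeness, the custom binding that turns {team} into a Team object can be a simple model binder along these lines (sketch; TeamRepository.FindBySlug is a placeholder for your own data access):
public class TeamModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        // Pull the {team} route value and load the matching Team,
        // or return null so the TeamMember filter responds with 403.
        var value = bindingContext.ValueProvider.GetValue("team");
        if (value == null || string.IsNullOrEmpty(value.AttemptedValue))
            return null;

        return TeamRepository.FindBySlug(value.AttemptedValue);
    }
}

// registered once at application start:
// ModelBinders.Binders.Add(typeof(Team), new TeamModelBinder());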
Since my app uses subdomains (sub1.app.com, sub2.app.com, ...), I basically chose to:
a) use something like the following code to cache info about tenants, and
b) call an action filter on each controller, as suggested by Ragesh & Doc.
(The following code is from the blog post at http://www.developer.com/design/article.php/10925_3801931_2/Introduction-to-Multi-Tenant-Architecture.htm)
/// <summary>
/// This class is used to manage the cached AppSettings
/// from the database
/// </summary>
public class AppSettings
{
    /// <summary>
    /// This indexer is used to retrieve AppSettings from memory
    /// </summary>
    public string this[string Name]
    {
        get
        {
            // See if we have an AppSettings cache item
            if (HttpContext.Current.Cache["AppSettings"] == null)
            {
                int? TenantID = 0;

                // Look up the URL and get the tenant info
                using (ApplContext dc = new ApplContext())
                {
                    Site result = dc.Sites
                        .Where(a => a.Host == HttpContext.Current.Request.Url.Host.ToLower())
                        .FirstOrDefault();
                    if (result != null)
                    {
                        TenantID = result.SiteID;
                    }
                }
                AppSettings.LoadAppSettings(TenantID);
            }

            Hashtable ht = (Hashtable)HttpContext.Current.Cache["AppSettings"];
            if (ht.ContainsKey(Name))
            {
                return ht[Name].ToString();
            }
            else
            {
                return string.Empty;
            }
        }
    }

    /// <summary>
    /// This method is used to load the app settings from the
    /// database into memory
    /// </summary>
    public static void LoadAppSettings(int? TenantID)
    {
        Hashtable ht = new Hashtable();

        // Now load the AppSettings
        using (ShoelaceContext dc = new ShoelaceContext())
        {
            // settings are turned off
            // no specific settings per user needed currently
            //var results = dc.AppSettings.Where(a =>
            //    a.in_Tenant_Id == TenantID);
            //foreach (var appSetting in results)
            //{
            //    ht.Add(appSetting.vc_Name, appSetting.vc_Value);
            //}
            ht.Add("TenantID", TenantID);
        }

        // Add it into cache (have the cache expire after 1 hour)
        HttpContext.Current.Cache.Add("AppSettings",
            ht, null,
            System.Web.Caching.Cache.NoAbsoluteExpiration,
            new TimeSpan(1, 0, 0),
            System.Web.Caching.CacheItemPriority.NotRemovable, null);
    }
}
If you want to execute common code like this on every Action in the Controller, you can do this:
protected override void OnActionExecuting(ActionExecutingContext filterContext)
{
    base.OnActionExecuting(filterContext);

    // do your magic here, you can check the session and/or call the database
}
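For example, a small sketch of a tenant-aware base controller built on the AppSettings cache above:
public class TenantController : Controller
{
    protected int TenantID { get; private set; }

    protected override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        base.OnActionExecuting(filterContext);

        // Resolve the tenant once per request from the cached AppSettings
        // and expose it to every action in the derived controllers.
        TenantID = Convert.ToInt32(new AppSettings()["TenantID"]);
    }
}

// Controllers inherit from TenantController and filter by TenantID:
// return View(context.Customers.Where(c => c.TenantID == TenantID).ToList());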
We have developed a multi-tenant application using ASP.NET MVC as well, and including the tenant ID in every query is a completely acceptable and really necessary thing to do. I'm not sure where you are hosting your application, but if you can use SQL Azure, they have a product called Federations that allows you to easily manage multi-tenant data. One nice feature is that when you open the connection you can specify the tenant ID, and all queries executed thereafter will only affect that tenant's data. It essentially includes the tenant ID in every request for you, so you don't have to do it manually. (Note that federating data is not a new concept; Microsoft just released their own implementation of it recently.)