I'm having trouble grasping some of the features of NHibernate's caching / database-hit-prevention techniques.
I've created a test case which is supposed to ensure that our web service API properly creates and saves a new object. The test case passes fine when I do not have to serialize through the web service (e.g. directly working with the web service class instead of adding it as a service reference and going up/down through it). However, I receive stale data from NHibernate when I run my test case against the hosted web service.
[Test]
public void CreateInstallTask()
{
int numberOfTasks = TaskDao.GetAll().Count();
TaskDto taskDto = WorkflowServices.CreateInstallTask(OrderID, TaskTemplateID, SiteID, DataCenterID,
DeviceTemplateID, DeviceName, Username);
if (TaskDao.GetAll().Count() == numberOfTasks)
{
string failureReason =
string.Format("Failed to create new Install task with OrderID: {0}", taskDto.OrderID);
throw new Exception(failureReason);
}
}
[WebMethod(Description = "Creates a new install Task.")]
public TaskDto CreateInstallTask(int orderID, int taskTemplateID, int siteID, int dataCenterID,
int deviceTemplateID, string deviceName, string username)
{
try
{
Order order = OrderDao.GetByID(orderID, shouldLock: false);
if (order == null)
throw new Exception(string.Format("Failed to find an order with ID {0}", orderID));
Task task = new Task
{
Order = order,
TaskType = TaskType.Install,
TaskTemplateID = taskTemplateID,
CreateUserID = username,
CreateDateTime = DateTime.Now
};
TaskAction taskAction = new TaskAction(TaskDao, TaskDeviceDao, ActivityDao, task, username);
//Call TaskDto.Create to convert Task into TaskDto for client-side use.
return TaskDto.Create(taskAction.CreateTask());
}
catch (Exception exception)
{
Logger.Error(exception);
throw;
}
}
The GetAll() method is simply a criteria.List() for all rows in a table. The CreateTask method just calls ISession.SaveOrUpdate();
I understand that I have the ability to force reloading data, but I do not understand why I should have to do this.
When I call SaveOrUpdate(entity), that entity should automatically be added to NHibernate's cache, right? Why would TaskDao.GetAll() return stale data?
I am worried about overusing CommitTransaction(). I do not think that I should call CommitTransaction() after every SaveOrUpdate() -- that defeats the purpose of NHibernate's caching. But I do not want stale data in my test cases, either. How can I keep my cache in sync?
You are correct that you should not commit your transaction after every save, but your web service should create a new transaction at the start of each web call and commit it at the end of the call.
Web services typically follow the same session-per-request pattern that web sites do: ensure that your web service infrastructure creates both a new NHibernate ISession and a new transaction with each request, and commits any changes at the end of that request. Keep in mind that the first-level cache is scoped to a single ISession, so your test's session cannot see entities that the service's session has saved but not yet committed.
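For example, here is a minimal session-per-request sketch for Global.asax. This assumes a static ISessionFactory named SessionFactory built at application start and NHibernate's web current-session-context configured; the names are illustrative, not your codebase:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // open a session and a transaction for this request and bind them
    var session = SessionFactory.OpenSession();
    session.BeginTransaction();
    NHibernate.Context.CurrentSessionContext.Bind(session);
}

protected void Application_EndRequest(object sender, EventArgs e)
{
    var session = NHibernate.Context.CurrentSessionContext.Unbind(SessionFactory);
    if (session == null) return;
    try
    {
        if (session.Transaction.IsActive)
            session.Transaction.Commit(); // flushes pending SaveOrUpdate calls
    }
    catch
    {
        session.Transaction.Rollback();
        throw;
    }
    finally
    {
        session.Dispose();
    }
}
With this shape, your DAOs resolve the request's session via SessionFactory.GetCurrentSession(), and everything saved during the call becomes visible to other sessions once the end-of-request commit runs.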
I have a SaaS project that needs to use Hangfire. We have already implemented what is required to identify a tenant.
Architecture
Persistence Layer
Each tenant has its own database
.NET Core
We already have a service TenantCurrentService which returns the ID of the tenant, determined from a list of sources (hostname, query string, etc.)
We already have a DbContextFactory for Entity Framework which returns a DbContext with the correct connection string for the tenant
We are currently using ASP.NET Core DI (willing to change if that helps)
Hangfire
Using a single storage (e.g. PostgreSQL), regardless of the tenant count
Execute the job in an appropriate Container/ServiceCollection, so we retrieve the right database, right settings, etc.
The problem
I'm trying to stamp a TenantId onto a job, retrieved from TenantCurrentService (which is a scoped service).
When the job gets executed, we need to retrieve the TenantId from the job and store it in HangfireContext, so that TenantCurrentService knows the TenantId retrieved from Hangfire. From there, our application layer can connect to the right database through our DbContextFactory.
Current state
Currently, we have been able to store the tenantId retrieved from our service using an IClientFilter.
How can I retrieve the current ASP.NET Core DI service scope from an IServerFilter (which is responsible for retrieving the saved job parameters), so I can call .GetRequiredService<TenantCurrentService>().IdentifyTenant(tenantId)?
Is there any good article regarding this matter / or any tips that you guys can provide?
First, you need to be able to set the TenantId in your TenantCurrentService.
Then, you can rely on filters:
client side (where you enqueue jobs)
public class ClientTenantFilter : IClientFilter
{
    public void OnCreating(CreatingContext filterContext)
    {
        if (filterContext == null) throw new ArgumentNullException(nameof(filterContext));
        filterContext.SetJobParameter("TenantId", TenantCurrentService.TenantId);
    }

    public void OnCreated(CreatedContext filterContext) { } // required by IClientFilter
}
and server side (where the job is dequeued).
public class ServerTenantFilter : IServerFilter
{
    public void OnPerforming(PerformingContext filterContext)
    {
        if (filterContext == null) throw new ArgumentNullException(nameof(filterContext));
        var tenantId = filterContext.GetJobParameter<string>("TenantId");
        TenantCurrentService.TenantId = tenantId;
    }

    public void OnPerformed(PerformedContext filterContext) { } // required by IServerFilter
}
The server filter can be declared when you configure your server through an IJobFilterProvider:
var options = new BackgroundJobServerOptions
{
Queues = ...,
FilterProvider = new ServerFilterProvider()
};
app.UseHangfireServer(storage, options, ...);
where ServerFilterProvider is :
public class ServerFilterProvider : IJobFilterProvider
{
public IEnumerable<JobFilter> GetFilters(Job job)
{
return new JobFilter[]
{
new JobFilter(new CaptureCultureAttribute(), JobFilterScope.Global, null),
new JobFilter(new ServerTenantFilter (), JobFilterScope.Global, null),
};
}
}
The client filter can be declared when you instantiate a BackgroundJobClient
var client = new BackgroundJobClient(storage, new BackgroundJobFactory(new ClientFilterProvider()));
where ClientFilterProvider behaves like ServerFilterProvider, delivering the client filter.
A difficulty may be having the TenantCurrentService available in the filters. I guess this should be achievable by injecting factories into the filter providers and chaining them through to the filters, as sketched below.
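For instance, here is a sketch of what that could look like with ASP.NET Core's IServiceScopeFactory injected into the server-side filter. TenantCurrentService and IdentifyTenant are the names assumed from the question, not a library API:
public class ScopedServerTenantFilter : IServerFilter
{
    private readonly IServiceScopeFactory _scopeFactory;

    public ScopedServerTenantFilter(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory ?? throw new ArgumentNullException(nameof(scopeFactory));
    }

    public void OnPerforming(PerformingContext filterContext)
    {
        var tenantId = filterContext.GetJobParameter<string>("TenantId");

        // resolve the scoped service from a fresh DI scope for this job
        using (var scope = _scopeFactory.CreateScope())
        {
            scope.ServiceProvider
                 .GetRequiredService<TenantCurrentService>()
                 .IdentifyTenant(tenantId);
        }
    }

    public void OnPerformed(PerformedContext filterContext) { }
}
ServerFilterProvider would then take the IServiceScopeFactory as a constructor argument and hand it to the filter it creates. One caveat: the scope opened here is not the scope Hangfire.AspNetCore's job activator uses to build the job instance itself, so IdentifyTenant may need to write the tenant id to something ambient (for example an AsyncLocal<string>) that the job's own scoped services read.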
I hope this will help.
I wrote a library, referenced by numerous applications, that tracks who is online and which application and page they are viewing.
The data is stored, using EF6, in a SQL Server 2008 table which tracks their username (primary key), application, page and timestamp. I only want to store the latest request for each person, so each username should only be stored once.
The library code, which is called from the Global.asax of each application looks like this:
public static void Add(ApplicationType application, string username, string pageRequested)
{
using (var db = new CommonDAL()) // EF context
{
var exists = db.ActiveUsers.Find(username);
if (exists != null)
db.ActiveUsers.Remove(exists);
var activeUser = new ActiveUser() { ApplicationID = application.Value(), Username = username, PageRequested = pageRequested, TimeRequested = DateTime.Now };
db.ActiveUsers.Add(activeUser);
db.SaveChanges();
}
}
I'm intermittently getting the error Violation of PRIMARY KEY constraint 'PK_tblActiveUser_Username'. Cannot insert duplicate key in object 'dbo.tblActiveUser'. The duplicate key value is (xxxxxxxx)
What I can only guess is happening is Request A comes in, removes the existing username. Request B (from same user) then comes in, tries to remove the username, sees nothing exists. Request A then adds the username. Request B then tries to add the username. The error frequently seems to be triggered when a web server sends a client a 401 status, which again points to multiple requests within a short period of time triggering this.
I'm having trouble reproducing this race condition in unit tests, as I haven't done much async programming before; I've tried creating async tests with delays to mimic multiple simultaneous slow requests. I've also tried using (var transaction = new TransactionScope()) and using (var transaction = db.Database.BeginTransaction(System.Data.IsolationLevel.ReadCommitted)) to serialize the requests so request A can complete before request B begins, but I can't verify that either one fixes the issue because I can't reproduce the situation reliably.
1) What is the right way to prevent the exception (so the most recent request is the one that is ultimately stored)?
2) What is the right way to write a unit test to prove this is working?
Since you only want to store the latest item, you could use a last-update-wins approach and avoid the race condition over who can insert first: the database handles the locks, and the last caller to update (which is the most recent request) is what ends up in the table.
Something like the following should handle any primary key errors if you hit the edge case where a brand new user issues two requests at the same time, while avoiding an "infinite" loop of retries (well, until a stack overflow, anyway).
public static void Add(ApplicationType application,
string username,
string pageRequested,
int recursionCount = 0)
{
using (var db = new CommonDAL()) // EF context
{
var exists = db.ActiveUsers.Find(username);
if (exists != null)
{
            // last update wins: overwrite the tracked request in place
            exists.PageRequested = pageRequested;
            exists.TimeRequested = DateTime.Now;
}
else
{
var activeUser = new ActiveUser
{
ApplicationID = application.Value(),
Username = username,
PageRequested = pageRequested,
TimeRequested = DateTime.Now
};
db.ActiveUsers.Add(activeUser);
}
        try
        {
            db.SaveChanges();
        }
        // EF6 wraps the SqlException; 2627/2601 are PK / unique index violations
        catch (DbUpdateException ex) when (ex.InnerException?.InnerException is SqlException sqlEx
                                           && (sqlEx.Number == 2627 || sqlEx.Number == 2601))
        {
            if (recursionCount < 3) // retry a bounded number of times
            {
                Add(application, username, pageRequested, recursionCount + 1);
            }
            else
            {
                throw;
            }
        }
}
}
As for unit testing this, it will be very hard unless you insert an artificial delay or can force both threads to run at the same time. Sometimes the timing on these race conditions is in the millisecond range, depending on the issue. Tasks may not work because they are not guaranteed to run at the same time: you throw them onto the background thread pool and they run when they can. Old-school threads may work, but the window between the read and the remove/create is most likely 5 ms or less, so you need a way to line the threads up deliberately, as sketched below.
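One way to line the threads up is a Barrier that releases both workers at the same instant, which makes the find/remove/insert windows overlap far more often. A hypothetical NUnit sketch follows; ActiveUserTracker and ApplicationType.Web stand in for whatever class and enum member host your static Add method:
[Test]
public void Add_TwoSimultaneousRequestsForSameUser_DoesNotThrow()
{
    var errors = new ConcurrentQueue<Exception>(); // System.Collections.Concurrent

    using (var barrier = new Barrier(participantCount: 2)) // System.Threading
    {
        ThreadStart work = () =>
        {
            try
            {
                barrier.SignalAndWait(); // both threads cross together
                ActiveUserTracker.Add(ApplicationType.Web, "sameuser", "/home");
            }
            catch (Exception ex)
            {
                errors.Enqueue(ex); // capture instead of crashing the test runner
            }
        };

        var t1 = new Thread(work);
        var t2 = new Thread(work);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
    }

    Assert.IsEmpty(errors, "at least one concurrent request threw");
}
Even with the barrier the interleaving is not guaranteed on every run, so repeating the body in a loop (say 100 iterations) makes the test much more likely to catch a regression.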
I have a problem when users post data. Sometimes two posts arrive so quickly that it causes a problem on my website.
A user wants to register a form costing about $100 and has a $120 balance.
When the post (save) button is pressed, sometimes two posts reach the server almost simultaneously, like:
2018-01-31 19:34:43.660 Register Form 5760$
2018-01-31 19:34:43.663 Register Form 5760$
As a result, my client's balance becomes negative.
I use an if statement in my code to check the balance, but the requests run so close together that I think both checks pass before either deducts the funds.
So I made a LockControllers class to avoid concurrency per user, but it does not work well.
I made a global action filter to throttle requests per user; this is my code:
public void OnActionExecuting(ActionExecutingContext context)
{
try
{
var controller = (Controller)context.Controller;
if (controller.User.Identity.IsAuthenticated)
{
bool jobDone = false;
int delay = 0;
int counter = 0;
do
{
delay = LockControllers.IsRequested(controller.User.Identity.Name);
if (delay == 0)
{
LockControllers.AddUser(controller.User.Identity.Name);
jobDone = true;
}
else
{
counter++;
System.Threading.Thread.Sleep(delay);
}
if (counter >= 10000)
{
context.HttpContext.Response.StatusCode = 400;
jobDone = true;
context.Result = new ContentResult()
{
Content = "Attack Detected"
};
}
} while (!jobDone);
}
}
catch (System.Exception)
{
}
}
public void OnActionExecuted(ActionExecutedContext context)
{
try
{
var controller = (Controller)context.Controller;
if (controller.User.Identity.IsAuthenticated)
{
LockControllers.RemoveUser(controller.User.Identity.Name);
}
}
catch (System.Exception)
{
}
}
I keep a static list of users and make their threads sleep until the previous request completes.
Is there any better way to manage this problem?
Note: the original question has since been edited, so this answer no longer applies as written.
So the issue isn't that the code runs too fast; fast is always good :) The issue is that the account is going into negative funds. If the client decides to post a form twice, that is the client's fault. It may be that you only want the client to pay once, which is a separate problem.
For the first problem, I would recommend using transactions (https://en.wikipedia.org/wiki/Database_transaction) to lock your table. That means you apply a change (or set of changes) inside a transaction and force other calls to that table to wait until those operations have completed. You can begin your transaction, check that the account has sufficient funds, and only then apply the change.
If they are only ever meant to pay once, then have a separate table that records whether the user has paid (again, checked within a transaction) before processing the update/add.
http://www.entityframeworktutorial.net/entityframework6/transaction-in-entity-framework.aspx
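A minimal sketch of that check-inside-a-transaction shape with EF6; MyDbContext, Accounts, Registrations and registrationCost are illustrative names, not your schema:
using (var context = new MyDbContext())
using (var transaction = context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
{
    var account = context.Accounts.Single(a => a.Id == accountId);

    if (account.Balance < registrationCost)
    {
        transaction.Rollback();
        throw new InvalidOperationException("Insufficient funds.");
    }

    account.Balance -= registrationCost;
    context.Registrations.Add(new Registration { AccountId = accountId, Cost = registrationCost });
    context.SaveChanges();
    transaction.Commit(); // a concurrent request now sees the reduced balance
}
Note that read-then-update under Serializable can deadlock under load; the more robust variant is a single atomic UPDATE ... SET Balance = Balance - @cost WHERE Id = @id AND Balance >= @cost, checking the affected row count.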
You have a few options here
You implement ETag functionality in your app, which you can use for optimistic concurrency. This works well when you are working with records, i.e. you have a database record, return it to the user, and then the user changes it. A rough sketch of that check follows.
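Here is roughly how that could look in an ASP.NET Core controller, assuming the record carries a rowversion column and the client echoes the ETag back in an If-Match header; Record, RecordDto and _context are illustrative:
[HttpPut("{id}")]
public async Task<IActionResult> Update(int id, RecordDto dto,
    [FromHeader(Name = "If-Match")] string ifMatch)
{
    var record = await _context.Records.FindAsync(id);
    if (record == null)
        return NotFound();

    // the ETag handed out earlier was the base64 of the row version
    var currentETag = Convert.ToBase64String(record.RowVersion);
    if (ifMatch != currentETag)
        return StatusCode(StatusCodes.Status412PreconditionFailed);

    record.Value = dto.Value;
    await _context.SaveChangesAsync();
    return Ok(new { ETag = Convert.ToBase64String(record.RowVersion) });
}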
Alternatively, you could add a required Guid field to your view model, pass it through to your app, add it to an in-memory cache, and check it on each request.
public class RegisterViewModel
{
[Required]
public Guid Id { get; set; }
/* other properties here */
...
}
and then use IMemoryCache or IDistributedCache (see the ASP.NET Core docs) to put this Id into the cache and validate it on each request
public IActionResult Register(RegisterViewModel register)
{
    if (!ModelState.IsValid)
        return BadRequest(ModelState);

    var userId = ...; /* get userId */

    if (_cache.TryGetValue($"Registration-{userId}", out Guid cachedId) && cachedId == register.Id)
    {
        return BadRequest(new { ErrorMessage = "Command already received from this user" });
    }

    // Set cache options.
    var cacheEntryOptions = new MemoryCacheEntryOptions()
        // Keep in cache for 5 minutes, reset time if accessed.
        .SetSlidingExpiration(TimeSpan.FromMinutes(5));

    // When we're here, the command wasn't executed before, so we save the key in the cache
    _cache.Set($"Registration-{userId}", register.Id, cacheEntryOptions);

    // call your service here to process it
    registrationService.Register(...);

    return Ok();
}
When the second request arrives, the value will already be in the (distributed) memory cache and the operation will fail.
If the caller does not set the Id, validation will fail.
Of course, everything Jonathan Hickey listed in his answer applies too: you should always validate that there is enough balance, and use EF Core's optimistic or pessimistic concurrency (a minimal sketch of a concurrency token is below).
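For reference, a minimal sketch of EF Core's optimistic concurrency with a rowversion token; the entity and property names are illustrative:
// Sketch: optimistic concurrency in EF Core via a rowversion token.
public class Account
{
    public int Id { get; set; }
    public decimal Balance { get; set; }

    [Timestamp] // System.ComponentModel.DataAnnotations; maps to SQL Server rowversion
    public byte[] RowVersion { get; set; }
}

// On SaveChanges, EF Core appends "WHERE RowVersion = @original" to the UPDATE;
// if another request changed the row first, zero rows match and a
// DbUpdateConcurrencyException is thrown, so the request can be retried or rejected.
try
{
    account.Balance -= amount;
    await _context.SaveChangesAsync();
}
catch (DbUpdateConcurrencyException)
{
    return Conflict(new { ErrorMessage = "The account was modified by another request." });
}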
I have a requirement where we need a plugin to retrieve a session id from an external system and cache it for a certain time. I use a field on the entity to test whether the session id is actually being cached. When I refresh the CRM form a couple of times, the output suggests there are consistently four versions of the same key at any given time. I have tried clearing the cache and testing again, but still get the same results.
Any help appreciated, thanks in advance.
Output on each refresh of the page:
20170511_125342:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125358:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125410:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125342:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125437:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125358:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125358:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125437:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
To accomplish this, I have implemented the following code:
public class SessionPlugin : IPlugin
{
public static readonly ObjectCache Cache = MemoryCache.Default;
private static readonly string _sessionField = "new_sessionid";
public void Execute(IServiceProvider serviceProvider)
{
var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
try
{
            // only run for the Retrieve message at the post-operation stage (40)
            if (context.MessageName.ToLower() != "retrieve" || context.Stage != 40)
                return;
var userId = context.InitiatingUserId.ToString();
// Use the userid as key for the cache
var sessionId = CacheSessionId(userId, GetSessionId(userId));
sessionId = $"{sessionId}:{Cache.Select(kvp => kvp.Key == userId).ToList().Count}:{userId}";
// Assign session id to entity
var entity = (Entity)context.OutputParameters["BusinessEntity"];
if (entity.Contains(_sessionField))
entity[_sessionField] = sessionId;
else
entity.Attributes.Add(new KeyValuePair<string, object>(_sessionField, sessionId));
}
catch (Exception e)
{
throw new InvalidPluginExecutionException(e.Message);
}
}
private string CacheSessionId(string key, string sessionId)
{
// If value is in cache, return it
if (Cache.Contains(key))
return Cache.Get(key).ToString();
var cacheItemPolicy = new CacheItemPolicy()
{
AbsoluteExpiration = ObjectCache.InfiniteAbsoluteExpiration,
Priority = CacheItemPriority.Default
};
Cache.Add(key, sessionId, cacheItemPolicy);
return sessionId;
}
private string GetSessionId(string user)
{
// this will be replaced with the actual call to the external service for the session id
return DateTime.Now.ToString("yyyyMMdd_hhmmss");
}
}
This has been greatly explained by Daryl here: https://stackoverflow.com/a/35643860/7708157
Basically, you do not have one MemoryCache instance for the whole CRM system; your output simply proves that there are multiple app domains for every plugin, so even static variables stored in such a plugin can have multiple values, which you cannot rely on. There is no documentation on MSDN that explains how the sandboxing works (especially app domains in this case), but certainly using static variables is not a good idea. Of course, if you are dealing with CRM Online, you cannot be sure whether there is only a single front-end server or many of them (which will also result in such behaviour).
Class-level variables should be limited to configuration information. Using a class-level variable as you are doing is not supported. In CRM Online, because of multiple web front ends, a specific request may be executed on a different server by a different instance of the plugin class than another request. Overall, assume CRM is stateless: unless data is persisted and retrieved, nothing should be assumed to be continuous between plugin executions.
Per the SDK:
The plug-in's Execute method should be written to be stateless because
the constructor is not called for every invocation of the plug-in.
Also, multiple system threads could execute the plug-in at the same
time. All per invocation state information is stored in the context,
so you should not use global variables or attempt to store any data in
member variables for use during the next plug-in invocation unless
that data was obtained from the configuration parameter provided to
the constructor.
Reference: https://msdn.microsoft.com/en-us/library/gg328263.aspx
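For contrast, a sketch of the supported shape: configuration arrives through the plugin constructor (set at step registration), everything per-invocation comes from the execution context, and nothing lives in static fields. FetchSessionIdFromExternalSystem is a hypothetical stand-in for the real external call:
public class SessionPlugin : IPlugin
{
    private readonly string _endpointUrl; // configuration only, never per-request state

    public SessionPlugin(string unsecureConfig, string secureConfig)
    {
        // CRM passes these strings from the plugin step registration
        _endpointUrl = unsecureConfig;
    }

    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        // Fetch the session id on every invocation (or read it from a shared
        // store such as a CRM entity or an external cache); do not keep it in
        // a static, because each app domain / front end gets its own copy.
        var sessionId = FetchSessionIdFromExternalSystem(_endpointUrl, context.InitiatingUserId);
        // ... stamp it on the output entity as before
    }

    private string FetchSessionIdFromExternalSystem(string endpointUrl, Guid userId)
    {
        // placeholder for the real external call
        return DateTime.Now.ToString("yyyyMMdd_HHmmss");
    }
}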
I am writing a remote service for an application using WCF, in which login information is kept in a database. The service requires session establishment through a login or account creation call. There is no ASP involved.
Now, when a client starts a session by calling an exposed IsInitiating method, I check the account data provided against the information on the database and, if it is not correct, I want to invalidate that session and force the client to start again with a call to an IsInitiating method.
Looking at some other questions, I have found pros and cons for two ways to invalidate a session. One does so the hard way, by throwing a FaultException; the other with softer manners, storing accepted session IDs.
Now, the first one, although achieving what I desire, is way too aggressive, given that incorrect logins are part of the normal flow of the application. The second one, on the other hand, allows the client to continue calling non-initiating methods, even though they will be rejected, while also incurring considerable code overhead on the service due to the added thread-safety requirements.
So, the question: Is there a third path which allows the service to invalidate the session initialization and communicate it to the client, so it is forced to make a new IsInitiating call?
A reduced version of the code I have:
[DataContractAttribute]
public class AccountLoginFault
{
public AccountLoginFault (string message)
{
this.Message = message;
}
[DataMemberAttribute]
public string Message { get; set; }
}
[ServiceContract (SessionMode = SessionMode.Required)]
public interface IAccountService
{
[OperationContract (
IsInitiating = true)]
[FaultContractAttribute (
typeof (AccountLoginFault),
ProtectionLevel = ProtectionLevel.EncryptAndSign)]
bool Login (AccountData account, out string message);
}
[ServiceBehavior (
ConcurrencyMode = ConcurrencyMode.Single,
InstanceContextMode = InstanceContextMode.PerSession)]
public class AccountService : IAccountService
{
public bool Login (AccountData account, out string message)
{
UserManager userdb = ChessServerDB.UserManager;
bool result = false;
message = String.Empty;
UserData userData = userdb.GetUserData (account.Name);
if (userData.Name.Equals (account.Name)
&& userData.Password.Equals (account.Password))
{
// Option one
// Get lock
// this.AcceptedSessions.Add (session.ID);
// Release lock
result = true;
} else
{
result = false;
// Option two
// Do something with session context to mark it as not properly initialized.
// message = "Incorrect account name or password. Account provided was " + account.Name;
// Option three
throw new FaultException<AccountLoginFault> (
new AccountLoginFault (
"Incorrect account name or password. Account provided was " + account.Name));
}
return result;
}
}
Throwing an exception is by far the easiest option because WCF enforces that the session cannot be re-used. From what I gather, what you would like to accomplish comes quite close to this functionality. But instead of forcing the client to call an IsInitiating method again, you would force the client to create a new connection. This looks like a very small difference to me.
An alternative would be to have a private bool _authorised variable and check it at every method call, for example:
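A minimal sketch of that flag approach; a plain bool is safe here because the service is InstanceContextMode.PerSession with ConcurrencyMode.Single, and MakeMove is a hypothetical non-initiating operation:
public class AccountService : IAccountService
{
    private bool _authorised;

    public bool Login (AccountData account, out string message)
    {
        message = String.Empty;
        _authorised = CheckCredentials (account); // assumed credential-check helper
        if (!_authorised)
            message = "Incorrect account name or password.";
        return _authorised;
    }

    public void MakeMove (Move move) // any non-initiating operation
    {
        if (!_authorised)
            throw new FaultException ("Session not authenticated; call Login first.");
        // ... normal processing
    }
}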
Or, to actively fault the session when the login fails, do something like this:
public ConnectResponseDTO Connect(ConnectRequestDTO request) {
...
if(LoginFailed)
OperationContext.Current.OperationCompleted += FaultSession;
}
private void FaultSession(object sender, EventArgs e) {
var context = (OperationContext) sender;
context.Channel.Abort();
}
This will fault the channel, and the client will have to re-establish the session.