I have a .NET 4.5 WebRole project running on local IIS (for dev purposes) and on Azure as a Cloud Service.
This project handles requests from users. Requests for one user can come from many client types, for example from the WWW (another project hosted on IIS and Azure), an Android app and a Windows application. There is one layer which translates each request type into a general request with a User as a parameter. User is an entity.
To communicate with the database (MS SQL Express or Azure SQL) I'm using Entity Framework.
The problem is with synchronization. I'd like to have synchronized request handling per user. There could be thousands of users: some not active, some hardly ever active and some active all the time. I want to avoid the slowdown caused by the mutexes.
This code would probably work, but it would be extremely slow:
private readonly object allUsersMutex = new object();
private readonly Dictionary<long, object> perUserMutex = new Dictionary<long, object>();

// Returns the lock object for this user, creating it on first use.
private object GetUserMutex(User user)
{
    lock (allUsersMutex)
    {
        if (!perUserMutex.ContainsKey(user.ID))
        {
            perUserMutex[user.ID] = new object();
        }
        return perUserMutex[user.ID];
    }
}
and then, in every request-handling method, I could obtain the mutex with this method and lock on it, as in the sketch below.
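A minimal sketch of that usage (the HandleRequest signature and the Request type are assumptions, just to illustrate the locking pattern):

public void HandleRequest(User user, Request request)
{
    // Only one request per user enters this block at a time;
    // requests for different users do not block each other.
    lock (GetUserMutex(user))
    {
        // ... load the user's data with Entity Framework, apply the request, save ...
    }
}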
Is there any mechanism that helps with something like this?
Sorry about the vague title; it's rather hard to explain. I have the following setup:
I'm running a .NET Core 2.2 Web API hosted in Service Fabric.
Part of this API's responsibilities is to monitor an external FTP storage for new incoming files.
Each file will trigger a Mediator Command to be invoked with processing logic.
I've implemented a hybrid solution based on https://learn.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice and https://blog.maartenballiauw.be/post/2017/08/01/building-a-scheduled-cache-updater-in-aspnet-core-2.html. In essence this is an IHostedService implementation that is registered in the Startup.cs of this API. It's basically a background service running in-process.
As for the problem: the solution above works fine on a 1-node cluster, but causes "duplicates" to be processed when running on a 5-node cluster. The problem lies in the fact that on a 5-node cluster there are, of course, 5 identical ScheduledTasks running, which will all access the same file on the FTP at the same time.
I've realised this is partly caused by improper separation of concerns: the API shouldn't be responsible for this; a completely separate process should handle it.
This brings me to the different services supported on Service Fabric (Stateful, Stateless, Actors and Guest Executables). The Actor seems to be the only one that runs single-threaded, even on a 5-node cluster. Additionally, an Actor doesn't seem well suited for this kind of scenario, as it needs to be triggered; in my case, I basically need a daemon that runs on a schedule all the time. If I'm not mistaken, the other stateful/stateless services will also run as 5 "clones" and cause the same issue I currently have.
I guess my question is: how can I do efficient background processing with Service Fabric and avoid these multi-threaded/duplicate issues? Thanks in advance for any input.
In Service Fabric you have 2 options with actors:
Reliable actor timers
Reliable actor reminders
You can use the actor's state to determine whether it has already processed your FTP file.
Have a look at this blog post to see how they used a reminder to run every 30 seconds.
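A minimal sketch of what that could look like (the actor name, the 30-second interval and the FTP helpers are assumptions; the reminder and state-manager calls are from the Reliable Actors API):

using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;

internal class FtpPollerActor : Actor, IRemindable
{
    public FtpPollerActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId) { }

    protected override Task OnActivateAsync()
    {
        // Reminders survive failover and node restarts, unlike timers.
        return RegisterReminderAsync("PollFtp", null,
            dueTime: TimeSpan.FromSeconds(30), period: TimeSpan.FromSeconds(30));
    }

    public async Task ReceiveReminderAsync(string reminderName, byte[] state, TimeSpan dueTime, TimeSpan period)
    {
        if (reminderName != "PollFtp") return;

        foreach (var fileId in await ListIncomingFileIdsAsync())   // assumed FTP helper
        {
            // Keep a per-file flag in actor state so repeated wake-ups stay idempotent.
            var processed = await StateManager.GetOrAddStateAsync("processed:" + fileId, false);
            if (!processed)
            {
                await ProcessFileAsync(fileId);                     // assumed processing logic
                await StateManager.SetStateAsync("processed:" + fileId, true);
            }
        }
    }

    private Task<string[]> ListIncomingFileIdsAsync() => Task.FromResult(new string[0]);
    private Task ProcessFileAsync(string fileId) => Task.CompletedTask;
}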
It's important that the code in your actor allows reentrancy.
Basically because the actors are reliable, your code might get executed multiple times and be canceled in the middle of an execution.
Instead of doing this:
public void Method(int fileId)
{
    _ftpService.Process(fileId);
}
Consider doing this:
public void Method(int fileId)
{
    if (_ftpService.IsNotProcessed(fileId))
    {
        _ftpService.Process(fileId);
        _ftpService.SetProcessed(fileId);
    }
}
If your actor has trouble disposing, you might want to check whether you are handling cancellation tokens in your code. I never had this issue, but we are using Autofac (with Autofac.ServiceFabric to register our actors via RegisterActor<T>()) and we have cancellation tokens in most of our logic. The documentation of CancellationTokenSource can also help you.
Example
private readonly CancellationTokenSource _cancellationTokenSource;
private readonly CancellationToken _cancellationToken;

public Ctor()
{
    _cancellationTokenSource = new CancellationTokenSource();
    _cancellationToken = _cancellationTokenSource.Token;
}

public async Task SomeMethod()
{
    while (/*condition*/)
    {
        // Stop the loop as soon as the actor is being deactivated.
        _cancellationToken.ThrowIfCancellationRequested();
        /*Other code*/
    }
}

protected override Task OnDeactivateAsync()
{
    _cancellationTokenSource.Cancel();
    return base.OnDeactivateAsync();
}
We are in the process of migrating an app from a set of Server 2008 servers to Server 2016, and since this app has ~75 private MSMQ queues, I wrote a very basic C# utility (just a console app) to get the list from our production server and recreate the queues on the new 2016 server via the following:
// Connect to the specified server to pull all existing queues.
var queues = MessageQueue.GetPrivateQueuesByMachine("[production server name]");

var acl = new AccessControlList();
acl.Add(new AccessControlEntry
{
    EntryType = AccessControlEntryType.Allow,
    GenericAccessRights = GenericAccessRights.All,
    StandardAccessRights = StandardAccessRights.All,
    Trustee = new Trustee("Everyone")
});
acl.Add(new AccessControlEntry
{
    EntryType = AccessControlEntryType.Allow,
    GenericAccessRights = GenericAccessRights.All,
    StandardAccessRights = StandardAccessRights.All,
    Trustee = new Trustee("Network Service")
});

foreach (var queue in queues)
{
    // Recreate each queue locally as transactional and apply the ACL.
    var newQueue = MessageQueue.Create($".\\{queue.QueueName}", true);
    newQueue.SetPermissions(acl);
    newQueue.Label = queue.QueueName;
}
When I start running our web app on the new server and execute an action that places a message on the queue, it fails with System.Messaging.MessageQueueException: Access to Message Queuing system is denied, despite the Everyone ACL entry that I confirmed was added to the queue.
The really strange part, though, is that if I delete the queue in question and recreate it manually on the server with the same "Everyone has full control" permissions, the code works successfully. I've compared the properties of an auto-generated queue to a manually created one and everything is 100% identical, so it makes zero sense why this would occur.
Any suggestions? I'm at a loss, but trying not to have to create all of these queues manually if I can avoid it.
After a lot of back-and-forth testing, I reached out to Microsoft Support and one of their engineers confirmed there's a bug of some kind on the .NET side with creating queues. We confirmed everything was identical, but the only time permissions worked was when the queue was created manually via the Computer Management snap-in. Creating it in code, regardless of permissions, caused it to not work correctly for multiple accounts.
Hopefully this helps anyone else trying to do this!
I'm writing a pretty big web system with ASP.NET MVC, which involves sending data in real time to numerous users, based on what they are subscribed to.
My team and I decided to use SignalR for that, and I am in charge of implementing it in the system.
In our case, a user picks a group to join, and then picks one Thing to work on.
For that, I'm saving all the users in a DB. I'll be using SignalR Groups to handle the first category, and when I need to push a message to a specific user (for the Thing he's picking) I'll just get his ConnectionID from the DB.
Here's the problem: every time the page is refreshed (for instance, when the user picks a group to join) he gets a new connection ID, and then he won't see anything that's pushed to him.
I saw that in the SignalR beta, and in version 2 (I only have 1.1.1 on the computer I'm working on), you can provide your own IUserIdProvider (IUserIdPrefixGenerator in the beta), or IUserConnectionIdFactory, etc., so I could give him the ID I want him to have, but I don't seem to have any of those in my version of SignalR.
There are many ways to solve this, but perhaps one of the simplest is to associate the new connection id with the user (maybe they still have the other connection open in a different tab). This can be done using any combination of IP address, User-Agent, headers, or location. Another good candidate is to use sessions, or just a simple identifier cookie (which is more or less what a session would do anyway).
I'll often use GUIDs for this and then create a table in the database when a new identifier cookie is created. Every time the user "refreshes" or opens a new tab, the cookie can be read in JS and sent with the hub.connect(). Then, you can create an association between the new connection id and the existing identifier cookie.
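A minimal server-side sketch of that association (assuming the client passes the identifier cookie value as a "userToken" query-string parameter when starting the connection; the hub name and method names are also assumptions):

using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class AppHub : Hub
{
    // Maps the stable identifier (the GUID cookie value) to the user's current connection id.
    private static readonly ConcurrentDictionary<string, string> ConnectionsByToken =
        new ConcurrentDictionary<string, string>();

    public override Task OnConnected()
    {
        var token = Context.QueryString["userToken"];   // assumed query-string key
        if (!string.IsNullOrEmpty(token))
        {
            ConnectionsByToken[token] = Context.ConnectionId;
        }
        return base.OnConnected();
    }

    public override Task OnDisconnected()
    {
        var token = Context.QueryString["userToken"];
        string removed;
        if (!string.IsNullOrEmpty(token))
        {
            ConnectionsByToken.TryRemove(token, out removed);
        }
        return base.OnDisconnected();
    }

    // Push to a specific user via the stable identifier instead of a stored connection id.
    public void SendToUser(string token, string message)
    {
        string connectionId;
        if (ConnectionsByToken.TryGetValue(token, out connectionId))
        {
            Clients.Client(connectionId).receiveMessage(message);
        }
    }
}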
I'd highly recommend figuring out a different way to maintain your users' persisted connections. Typically, I keep all of my users' connection ids stored in a concurrent dictionary to allow thread-safe access to the collection. I remove users from the dictionary whenever a disconnection event occurs and add them whenever a connection event occurs.
SignalR will manage your users' connections for you. For you to do it in the database and fall out of sync with SignalR circumvents a lot of the mechanics that make it work correctly in the first place.
// Single shared instance, created lazily with the hub's client context.
private readonly static Lazy<App> _instance = new Lazy<App>(
    () => new App(GlobalHost.ConnectionManager.GetHubContext<AppHub>().Clients));

// Thread-safe map of connection id -> user, maintained from the hub's connect/disconnect events.
private readonly ConcurrentDictionary<string, User> _users =
    new ConcurrentDictionary<string, User>(StringComparer.OrdinalIgnoreCase);

private IHubConnectionContext Clients { get; set; }

public App(IHubConnectionContext clients)
{
    Clients = clients;
}
I have a Worker Role that executes code (fetching data and storing it to Azure SQL) every X hours. The timing is implemented using a Thread.Sleep in the while(true) loop in the Run method.
In the Web Role I want to have the ability to manually start the code in the Worker Role (manually fetch and store data, in my case). I found out that the whole Worker Role can be restarted using the Azure Management API, but that seems like overkill, especially looking at all the work needed around certificates.
Is there a better way to restart Worker Role from Web Role or have the code in Worker Role run on demand from the Web Role?
Anything like posting a message to an Azure Queue, posting a blob to Azure Blob storage, changing a record in Azure Tables or even making some change in SQL Azure will work: the web role makes the change and the worker role waits for it. Perhaps Azure Queues would be the cleanest way, although I'm not sure.
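For the queue option, a minimal sketch could look like this (the queue name, connection string and DoFetchAndStore are assumptions; it uses the classic Microsoft.WindowsAzure.Storage client):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Web Role: drop a "run now" message on the queue.
var account = CloudStorageAccount.Parse(connectionString);                 // connection string assumed
var queue = account.CreateCloudQueueClient().GetQueueReference("run-now"); // queue name assumed
queue.CreateIfNotExists();
queue.AddMessage(new CloudQueueMessage("fetch-and-store"));

// Worker Role Run() loop: between scheduled runs, check whether a manual trigger arrived.
var message = queue.GetMessage();
if (message != null)
{
    DoFetchAndStore();          // assumed: the same code the schedule already runs
    queue.DeleteMessage(message);
}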
One very important thing you should watch out for is that if you decide to use polling - like querying for a blob until it appears - you should insert a delay between the queries; otherwise this code:
while (true)
{
    if (storage.BlobExists(blobName))
    {
        break;
    }
}
will hammer the storage and you'll encounter outrageous transaction fees. In the case of SQL Azure you will not see any fees, but you'll waste the service capacity for no good reason and slow down other operations you send to SQL Azure.
This is how it should be done:
while (true)
{
    if (storage.BlobExists(blobName))
    {
        break;
    }
    // The delay should not be less than several hundred milliseconds.
    System.Threading.Thread.Sleep(15 * 1000);
}
Well I suggest you use Azure Fluent Management (which uses the Service Management API internally). Take a look at the "Deploying to Windows Azure" page.
What you will want to do is the following:
Cloud Service: mywebapp.cloudapp.net
  Production slot
    Role: MyMvcApplication

Cloud Service: mybackgroundworker.cloudapp.net
  Production slot
    No deployment
So you would typically have a Cloud Service running with a Web Role and that's it. What you do next is create the Worker Role, add your code, package it to a cspkg file and upload it to blob storage.
Finally you would have some code in your Web Role that can deploy (or remove) the Worker Role to that other Cloud Service by downloading the package locally and then running code similar to this:
var subscriptionManager = new SubscriptionManager(TestConstants.SubscriptionId);
var deploymentManager = subscriptionManager.GetDeploymentManager();
deploymentManager
    .AddCertificateFromStore(Constants.Thumbprint)
    .ForNewDeployment(TestConstants.HostedServiceName)
    .SetCspkgEndpoint(@"C:\mypackage")
    .WithNewHostedService("myelastatestservice")
    .WithStorageAccount("account")
    .AddDescription("my new service")
    .AddLocation(LocationConstants.NorthEurope)
    .GoHostedServiceDeployment();
I would like to know how the Session locking mechanism works and how I can lock a variable and its respective child objects for multiple-reader/exclusive-writer access in a server farm environment.
Scenario
The web farm will use 3 Windows 2003 servers; each server has its own app domain for the Web application. The session object is saved on SQL Server 2005.
The object to use in my web app is as follows:
public class MySampleClass
{
    public string Id;
    public Dictionary<string, CustomClass1> Data;
    public List<string> Commands;
    public CustomClass2 MoreData;
}
where CustomClass1 and CustomClass2 are business classes that are part of the application.
Now, in one of the web pages, the code will look like:
Session["myObj"] = new MySampleClass();
In other pages:
var myObj = (MySampleClass)Session["myObj"];
// Is Session["myObj"] accessed in a multiple-reader/exclusive-writer mode? If so, is it locking just the variable or the whole contents?
myObj.Commands.Add("sample string");
myObj.Commands.RemoveAt(0);
// More CRUD changes
// Are these changes available to other pages as soon as I finish the CRUD changes?
Let me know if you need more details.
Have a look here under "Locking Session-Store Data". Basically, unless your page says it wants read-only session access, the session is locked in the DB, and other callers for that session will poll at a half-second interval until it is unlocked.
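For example, a page that only reads the session can declare that with the EnableSessionState page directive, which avoids taking the exclusive lock (a minimal illustration):

<%@ Page Language="C#" EnableSessionState="ReadOnly" %>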
There is also a detailed explanation of "Session State Providers" on MSDN.
It covers the algorithm and rules used when using out-of-process session state providers in ASP.NET.