I am working on ASP.NET Core with Azure Service Fabric, and we also have a stateless service. In the Azure portal we have 5 nodes, and on each node we run 2 instances. I have implemented a logging mechanism and I am also using dependency injection.
Whenever the worker role picks up a record from the database, there are a few values I need to keep around until that record has been processed, because I need to log those values through my logging framework. I am using these values to track telemetry across all systems.
Currently I create an object for these values and assign it to the logging class whenever I pick up a document.
It works fine when I run in my local environment because I have one node and one instance. Once I moved to Azure, the values started getting overwritten.
How can I avoid this? How can I make sure the values do not change until the record has been processed?
In ASP.NET we have session state; is there anything similar for a worker role?
Thanks for your inputs and guidance.
I think your problem is related to the Service Fabric hosting model.
Service Fabric has two hosting models: Shared Process and Exclusive Process.
The Shared Process model is the default when you don't specify a hosting model while creating the service. It uses a single process to host multiple replicas from the same code package (executable). That means when you start multiple replicas on the same node, Service Fabric creates one process and initialises every new replica inside that process. This is a big problem when you use static or singleton objects, because all replicas see the same object, even though your code logic expects a separate one per instance of your service.
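As a hypothetical illustration of that pitfall (the class name here is made up), a static field like the one below ends up shared by every replica hosted in the same process:

```csharp
// Hypothetical illustration: under the Shared Process model, every replica
// hosted in the same process sees this one field.
internal static class CorrelationContext
{
    // Replica A assigns the record it picked up; replica B, running in the
    // same process, silently overwrites it.
    public static string CurrentRecordId;
}
```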
The Exclusive Process model creates a new process every time a new replica is placed on the same node. So if you are using static or singleton objects, this is the model to use.
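If you create the service through code rather than through the application manifest, a sketch along these lines would request the Exclusive Process model (the names are placeholders, and it assumes a Service Fabric SDK version that exposes ServicePackageActivationMode on the service description):

```csharp
using System;
using System.Fabric;
using System.Fabric.Description;

// Sketch: ask for one host process per replica instead of the shared default.
var fabricClient = new FabricClient();

var description = new StatelessServiceDescription
{
    ApplicationName = new Uri("fabric:/MyApp"),
    ServiceName = new Uri("fabric:/MyApp/MyStatelessService"),
    ServiceTypeName = "MyStatelessServiceType",
    PartitionSchemeDescription = new SingletonPartitionSchemeDescription(),
    InstanceCount = -1, // one instance on every node
    // SharedProcess is the default; ExclusiveProcess isolates each replica
    ServicePackageActivationMode = ServicePackageActivationMode.ExclusiveProcess
};

await fabricClient.ServiceManager.CreateServiceAsync(description);
```

The same setting can also be applied declaratively, via the ServicePackageActivationMode attribute on the Service element in ApplicationManifest.xml, or when creating the service from PowerShell.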
Related
There are two questions on here which both have highly-upvoted, but seemingly contradictory answers.
What is the actual scope of a static variable?
In my case, let's say I have a WCF service running under IIS, with several servers behind a load balancer, one site on each server and one app pool per site. Let's say there's a static variable stored in the class that implements the service.
Is the variable scoped to the worker process only? The app pool? The server? I tried to research it but found two competing answers on here.
Under this post:
IIS app pools, worker processes, app domains
The reply says "Each worker process is a different program that's run your site, have their alone [own?] static variables"
Yet under this post:
Lifetime of ASP.NET Static Variable
The reply says "The static variables are per pool"
Maybe I just don't understand the posts, but they seem contradictory?
It appears I have several worker processes running when I checked. Hence my question.
Any help would be appreciated. I am trying to refactor some stuff away from using static variables since it seems risky and exposes concurrency problems but I am very uncomfortable proposing changes without understanding the current behaviour. Thanks!
Static variables persist for the life of the application domain. So there are two things that will reset your static variables: an application domain restart, or the use of a new class.
Each application pool can have multiple worker processes. Each worker process will run different application instances, and each instance of an application has a separate AppDomain - one per application instance.
Static variables are lost when IIS restarts your ASP.NET application, which happens when any of the following occurs:
The pool decides it needs to recompile the application.
An app_offline.htm file is placed in the application root.
The app pool is restarted manually.
The pool reaches one of the limits defined on the server and recycles.
IIS is restarted.
Static variables are not thread safe, and you need to use the lock keyword if you access them from different threads.
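A minimal sketch of what that looks like for a shared counter (the names are illustrative):

```csharp
// Guard shared static state with a lock so concurrent requests don't race on it.
public static class VisitCounter
{
    private static readonly object Sync = new object();
    private static int count;

    public static int Increment()
    {
        lock (Sync)
        {
            count++;
            return count;
        }
    }
}
```

For a simple counter like this, Interlocked.Increment(ref count) would do the same job without an explicit lock.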
Since an app restart will reset your static variables, and you want to persist data across your application's lifetime, you should store the data persistently in a database or a file. You can store per-user information in Session State with the database session state mode.
ASP.NET Application State/Variables will not help either: they are stored in memory, so they are not persistent and are lost on an app domain restart too.
When you define a static field in a web or WCF application, it is only shared within the limits of the app domain.
Recently I encountered a situation where I want to make my existing Windows service run in a Service Fabric cluster.
However, this service performs some registry operations and depends on them.
In Service Fabric this is meaningless (as the service can be shut down and re-run on another node). What are my options for making a smooth transition?
Use a database instead of the registry? Any other ideas (in some cases the service stores data in the registry as a fallback in case an error occurs in the database)?
You could just create a stateful service for this. The service would store the (registry) values in a reliable dictionary, so the state would always be available to the service.
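A rough sketch of what that could look like with the Reliable Services programming model (the service and key names are placeholders):

```csharp
using System.Fabric;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

// Sketch: the former registry values live in a reliable dictionary, so they
// follow the service wherever its replicas are placed.
internal sealed class SettingsService : StatefulService
{
    public SettingsService(StatefulServiceContext context) : base(context) { }

    public async Task SetValueAsync(string key, string value)
    {
        var settings = await StateManager
            .GetOrAddAsync<IReliableDictionary<string, string>>("settings");

        using (var tx = StateManager.CreateTransaction())
        {
            await settings.AddOrUpdateAsync(tx, key, value, (k, existing) => value);
            await tx.CommitAsync();
        }
    }

    public async Task<string> GetValueAsync(string key)
    {
        var settings = await StateManager
            .GetOrAddAsync<IReliableDictionary<string, string>>("settings");

        using (var tx = StateManager.CreateTransaction())
        {
            var result = await settings.TryGetValueAsync(tx, key);
            return result.HasValue ? result.Value : null;
        }
    }
}
```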
I have a web application which consists of 1 web role and 1 worker role instance. The web role basically aggregates some blobs on blob storage, and the worker comes along, sweeps up the stored files, and processes them.
The problem is that if I were to use 2 worker role instances, there may be duplicate results, because the blobs only get removed after successful processing. To avoid this I decided to use Hangfire.
What I plan on doing is this: when the web role gets the request, it will not only save the file to blob storage but also enqueue the processing task for that file. Then Hangfire worker threads will process it.
This raises another question: where should I deploy Hangfire?
If I deploy Hangfire onto the web role instances, I will be able to access the UI since they run IIS, but I won't be able to isolate the resources used for background processing from the web role itself. If I deploy it onto the worker role instances, it won't be able to serve the UI, since there is no IIS.
Is there a way to have the web role instances fire the tasks but have the worker instances look for and consume them? If so, how?
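For reference, Hangfire does support that kind of split, because enqueuing and processing only need to share the same job storage. A hedged sketch of the arrangement described above (SQL Server job storage assumed; BlobProcessor.ProcessBlob is a hypothetical method you would implement):

```csharp
using System.Threading;
using Hangfire;

public static class BlobProcessor
{
    public static void ProcessBlob(string blobName)
    {
        // download the blob, process it, delete it on success (sketch)
    }
}

// Web role: configure the shared storage once at startup, then only enqueue.
public static class WebRoleSide
{
    public static void ConfigureAtStartup()
    {
        GlobalConfiguration.Configuration
            .UseSqlServerStorage("<jobs-connection-string>");
    }

    public static void EnqueueProcessing(string blobName)
    {
        BackgroundJob.Enqueue(() => BlobProcessor.ProcessBlob(blobName));
    }
}

// Worker role: point at the same storage and host the BackgroundJobServer,
// which pulls jobs from that storage and executes them here.
public static class WorkerRoleSide
{
    public static void Run()
    {
        GlobalConfiguration.Configuration
            .UseSqlServerStorage("<jobs-connection-string>");

        using (var server = new BackgroundJobServer())
        {
            Thread.Sleep(Timeout.Infinite); // keep the worker role alive
        }
    }
}
```

With this split the dashboard UI can stay on the web role (it only reads from the job storage), while the actual processing happens on the worker role instances.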
Is it possible to start worker role instances dynamically from a C# application running on an Azure Windows VM?
In Azure I have a Medium virtual machine. On it there is a C# console application that runs automatically at 11:00 PM daily and keeps processing data until about 7:00 AM. My data is getting bigger and thus needs more time to be processed, and I need to finish processing all the data before 5:00 AM.
Is it possible to use a worker role to run an instance of the application and pass it a part of the data to process?
Note that my process makes HTTP requests to external websites, and the processed data gets written to MongoDB.
I am not sure where to start, and I am not sure whether using worker roles is better than creating a couple of VMs.
In general, how would you solve this problem with the tools available on Azure?
"Is it possible to start worker role instances dynamically from a C# application running on an Azure Windows VM?"
Absolutely, yes. In order to do so, you would need to consume the Service Management API. You could either write code yourself to consume this API, or use the Windows Azure Management Libraries, which you can install from NuGet. To learn more about this API, you may find this blog post useful: http://www.bradygaster.com/post/getting-started-with-the-windows-azure-management-libraries.
Generally speaking, Worker Roles are equivalent to Windows Services in the sense that both are used to perform background tasks. Since you're performing background tasks through your VM, I can't see any reason why you can't do the same through a Worker Role instance. My recommendation would be to go through the tutorials available online or the Windows Azure Platform Training Kit to become familiar with Worker Role concepts and how you could make use of them in your project.
For your specific scenario you may want to look at the auto scale rules that are now available; In the configuration for the worker role, in the Azure Management Console, you can specify, for example, that you want at least two workers running between certain times each day.
The Service Management API gives you a lot more control, but the auto scale is quick and easy to start with.
Incidentally, if the work your worker has to do can be divided into atomic chunks, then you may want to use a storage queue to write all the tasks to and then have the worker role pull tasks off that queue. You can then configure the autoscale to monitor the length of the queue and start and stop workers as required.
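A sketch of that queue-based hand-off with the classic storage client library (Microsoft.WindowsAzure.Storage; the connection string and queue name are placeholders):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Shared setup: the producer and the worker role point at the same queue.
var account = CloudStorageAccount.Parse("<storage-connection-string>");
var queue = account.CreateCloudQueueClient().GetQueueReference("work-items");
queue.CreateIfNotExists();

// Producer (e.g. the VM that splits the nightly batch into chunks):
queue.AddMessage(new CloudQueueMessage("chunk-0001"));

// Consumer (worker role loop): take a task, process it, then delete it.
var message = queue.GetMessage(TimeSpan.FromMinutes(5)); // hidden while processing
if (message != null)
{
    // ... process the chunk identified by message.AsString ...
    queue.DeleteMessage(message);
}
```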
I am developing a distributed application where each distributed site will run a 2-tier WinForms-based client-server application (there are specific reasons for not going with a web architecture). The local application server will be responsible for communicating with a central server, and the local workstations will in turn communicate with the local server.
I am about to develop one or more Windows services to achieve the following objectives. I presume some of them will run on the local server while others run on the local workstations. The central server is just a SQL Server database, so no services need to be developed on that end; DML statements will simply be fired from the local server.
Windows Service Objectives
Synchronise local data with data from the central server (both ends SQL Server) based on a schedule
Take automatic database backup based on a schedule
Automatically download software updates from vendor site based on a schedule
Enforce software license management by monitoring connected workstations
Intranet messaging between workstations and/or local server (like a little richer NET SEND utility)
Manage deployment of updates from local server to workstations
Though I have something in mind, I am a little confused about deciding, and I need to cross-check my thoughts with experts like you. Following are my queries:
How many services should run on the local server?
Please specify individual service roles.
How many services should run on the workstations?
Please specify individual service roles.
Since some services will need to query the local SQL database, where should the services read the connection string from? registry? Config file? Any other location?
If one service is allotted multiple tasks, e.g. a scheduler service, what is the best/optimal way, with respect to performance, to manage the scheduled tasks?
a. Build .NET class Library assemblies for the scheduled tasks and spin them up on separate threads
b. Build .NET executable assemblies for the scheduled tasks and fire the executables
Though it may seem as too many questions in a single post, they are all related. I will share my views with the forum later as I do not want to bias the forum's views/suggestions in any way.
Thanking all of you in advance.
How many services should run on the local server?
It depends on many factors, but I would generally go for as few services as possible, because maintaining and monitoring one service is less work for the admin than having many services.
How many services should run on the workstations?
I would use just one, because that makes it a single point of failure: the user will notice if the service on the workstation is down, and in that case only one service needs to be restarted.
Since some services will need to query the local SQL database, where should the services read the connection string from? registry? Config file? Any other location?
I would generally put the connection string in the app.config. The .NET framework also offers facilities to encrypt the connection string.
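For example, a connection string kept in app.config can be read with ConfigurationManager (the entry name here is a placeholder):

```csharp
using System.Configuration; // reference System.Configuration.dll

// app.config:
//   <connectionStrings>
//     <add name="LocalDb"
//          connectionString="Data Source=.;Initial Catalog=LocalDb;Integrated Security=True" />
//   </connectionStrings>
var connectionString =
    ConfigurationManager.ConnectionStrings["LocalDb"].ConnectionString;
```

The connectionStrings section can additionally be encrypted with the protected configuration providers (for example programmatically via ConfigurationSection.SectionInformation.ProtectSection).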
If one service is allotted multiple tasks, e.g. a scheduler service, what is the best/optimal way, with respect to performance, to manage the scheduled tasks: a. build .NET class library assemblies for the scheduled tasks and spin them up on separate threads, or b. build .NET executable assemblies for the scheduled tasks and fire the executables?
Option b is easier to design and implement, and it gives you the possibility of using the Windows scheduler. In this case you will need to think about what happens when the Windows scheduler starts the executable while the previous run has not finished yet; this results in two processes which may do the same work. If that is not a problem, then stay with that design. If it is a problem, consider solution a.
For solution a, have a look at Quartz.NET, which offers a lot of advanced scheduling capabilities. Also consider using application domains instead of threads to make the service more robust.
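A small sketch of what that could look like with Quartz.NET (3.x-style API; the job and cron schedule are placeholders):

```csharp
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// One scheduled task of the service, implemented as a Quartz job.
public sealed class SyncJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // synchronise local data with the central server here
        return Task.CompletedTask;
    }
}

public static class SchedulerBootstrap
{
    public static async Task StartAsync()
    {
        var scheduler = await StdSchedulerFactory.GetDefaultScheduler();

        var job = JobBuilder.Create<SyncJob>()
            .WithIdentity("sync-job")
            .Build();

        var trigger = TriggerBuilder.Create()
            .WithIdentity("sync-trigger")
            .WithCronSchedule("0 0 2 * * ?") // every day at 02:00
            .Build();

        await scheduler.ScheduleJob(job, trigger);
        await scheduler.Start();
    }
}
```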
If you don't get admin rights on the local server, think about means to restart the service without the service control manager. Give some privileged user the possibility to re-initialize the service from a client machine.
Also think about ways to restart just one part of a service if one service is doing multiple tasks. For instance, the service is behaving strangely because the update task is misbehaving; if you need to restart the whole service to fix this, all users may become aware of it. Provide some means to re-initialize only the update task.
Most important: don't follow any of my advice if you find an easier way to achieve your goals. Start with a simple design. Don't overengineer! Solutions with (multiple) services and scheduling tend to explode in complexity with each added feature, especially when you need to let the services talk to each other.
I don't think there is one answer to this; some would probably use just one service, while others would modularize every domain into a service and add enterprise transaction services, etc. The question is more of an SOA one than a C# one, and you might consider reading up on some SOA books to find your pattern.
This does not answer your questions specifically, but one thing to consider is that you can run multiple services in the same process. The ServiceBase.Run method accepts an array of ServiceBase instances to start. They are technically different services and appear as such in the service control manager, but they are executed inside the same process. Again, just something to consider in the broader context of your questions.
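A minimal sketch of that pattern (the two service classes are hypothetical):

```csharp
using System.ServiceProcess;

internal sealed class SyncService : ServiceBase
{
    public SyncService() { ServiceName = "LocalSyncService"; }
    protected override void OnStart(string[] args) { /* start sync timers */ }
    protected override void OnStop() { /* stop sync timers */ }
}

internal sealed class UpdateService : ServiceBase
{
    public UpdateService() { ServiceName = "LocalUpdateService"; }
    protected override void OnStart(string[] args) { /* start update checks */ }
    protected override void OnStop() { /* stop update checks */ }
}

internal static class Program
{
    private static void Main()
    {
        // Both services are hosted by this single executable/process,
        // but each appears separately in the service control manager.
        ServiceBase.Run(new ServiceBase[] { new SyncService(), new UpdateService() });
    }
}
```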