I have been looking at Azure Functions with VS 2017 from a performance-improvement point of view. Dependency injection is something that's not currently supported by Azure Functions, but if we use a workaround
(something like this: https://blog.wille-zone.de/post/azure-functions-proper-dependency-injection/ ) and perform dependency injection from the static constructor of a function, what impact does it have on performance? Especially with the two hosting plans:
1) Consumption Plan: If I understand correctly, it is possible that every request is separate and will create a new host in this plan. Does this mean that the static constructor will be called every time, and all objects will be instantiated again? In that case, should dependency injection be avoided for the Consumption Plan?
2) App Service Plan: This will have a dedicated VM on which the function runs, and provided "Always On" is enabled, the function will only be initialized once. In this case, does dependency injection make more sense? Or will the function still exit its context once the trigger completes, with new instances created every time?
I couldn't find a proper explanation about this (if it's possible at all). Does anyone have an idea?
The Consumption Plan doesn't mean that you get a new host on every request. Existing hosts are reused for subsequent requests unless a) they are too busy, scale-out kicks in and you get a new host, or b) there are no requests for several minutes and your only host gets recycled.
Overall, I don't see such dependency injection being a bottleneck in most scenarios.
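For what it's worth, here is a minimal sketch of the kind of static initialization the linked workaround describes (the service and the container-less wiring are hypothetical stand-ins, not the blog's actual code): the dependencies live in a static Lazy<T>, so they are built once per host and reused for every invocation that lands on that host. On the Consumption Plan they are only rebuilt when a new host is created or your idle host is recycled.

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

// Hypothetical service, defined here only to keep the sketch self-contained.
public interface IGreetingService { string Greet(string name); }
public class GreetingService : IGreetingService
{
    public string Greet(string name) { return "Hello, " + name; }
}

public static class GreetFunction
{
    // Built once per host process and reused by every invocation on that host.
    // A new host (scale-out, or a recycle after idling) re-runs the initializer.
    private static readonly Lazy<IGreetingService> Service =
        new Lazy<IGreetingService>(() => new GreetingService());

    [FunctionName("Greet")]
    public static void Run([QueueTrigger("names")] string name, TraceWriter log)
    {
        log.Info(Service.Value.Greet(name));
    }
}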
I'm currently investigating some code which has a cache layer that, at the bottom level, uses the MemoryCache class. This is a C# Windows Service app, so not web/IIS. There is a section of the code which spawns off a number of threads, in which it creates and executes some code in a POC to perform some calculations. The results are then stored in the above-mentioned cache layer. What we're seeing is that cached values appear to be stored per thread and not at application level. I thought that MemoryCache was a singleton that would sit outside of the individual threads.
Can anybody confirm this behaviour would be expected?
Many thanks for any comments.
A MemoryCache is thread-safe, but there is no reason to assume it's a singleton. If you want different threads to access the same MemoryCache instance, you need to give them all a reference to the same instance: either as a singleton (really bad), a static (still bad), or through argument passing as dependency injection (good).
The simple way to do it (which does use global state) is to access the default memory cache:
var cache = MemoryCache.Default; // not really a good idea and harder to test, but it works
You can find the specific docs here. Make sure to configure it in your app/web.config file.
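To make the answer concrete, here is a rough sketch (the worker class and key names are made up) of handing one MemoryCache instance to every thread so all results land in the same cache:

using System;
using System.Runtime.Caching;
using System.Threading.Tasks;

// Hypothetical worker that receives the shared cache rather than creating its own.
public class CalculationWorker
{
    private readonly MemoryCache _cache;

    public CalculationWorker(MemoryCache cache) { _cache = cache; }

    public void Run(int id)
    {
        // AddOrGetExisting is atomic, so two threads computing the same key
        // won't clobber each other's entry.
        _cache.AddOrGetExisting("calc:" + id, "result-" + id,
            DateTimeOffset.UtcNow.AddMinutes(30));
    }
}

public static class Program
{
    public static void Main()
    {
        var sharedCache = new MemoryCache("calculations");   // one instance for the whole service
        Parallel.For(0, 10, i => new CalculationWorker(sharedCache).Run(i));
        Console.WriteLine(sharedCache.GetCount());           // 10 - every thread wrote to the same cache
    }
}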
We are using the PetaPoco repository pattern (similar to this blog post). As the page loads we open up the repository, run the query, dispose of it, and then carry on processing. This is fine on light pages, but when this happens a few times in the page we get quite significant performance degradation.
I had, perhaps wrongly, assumed that connection pooling (which is enabled) would cope with this.
I ran a couple of tests.
The page it's on (an ASPX page) takes around 1.2 seconds to load as it is at the moment. The page is running around 30 database queries and, looking at the profiler, is doing a login and logout per query (even with connection pooling).
If I persist the connection and don't close until the page ends, this drops to around 70ms, which is quite a significant saving.
Perhaps we need to keep the Database object hanging around for the request, but I didn't think PetaPoco had this much of an overhead...particularly with the connection pooling.
I have created a test app to demonstrate it.
This demonstrates that loading a user 1000 times takes 230ms if the repository is reused, but 3.5 seconds if the repository is recreated every time.
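The comparison is roughly of this shape (UserRepository is a simplified stand-in for our PetaPoco-backed repository, shown only to illustrate the two loops being timed):

using System;
using System.Diagnostics;

public class UserRepository : IDisposable
{
    private readonly PetaPoco.Database _db = new PetaPoco.Database("DefaultConnection");
    public dynamic GetUser(int id) { return _db.SingleOrDefault<dynamic>("SELECT * FROM Users WHERE Id = @0", id); }
    public void Dispose() { _db.Dispose(); }
}

public static class RepositoryBenchmark
{
    public static void Main()
    {
        var reused = Stopwatch.StartNew();
        using (var repo = new UserRepository())            // one repository (and connection) for all 1000 loads
        {
            for (var i = 0; i < 1000; i++) repo.GetUser(1);
        }
        reused.Stop();

        var recreated = Stopwatch.StartNew();
        for (var i = 0; i < 1000; i++)
        {
            using (var repo = new UserRepository())        // new repository per load
            {
                repo.GetUser(1);
            }
        }
        recreated.Stop();

        Console.WriteLine("Reused: {0} ms, recreated: {1} ms",
            reused.ElapsedMilliseconds, recreated.ElapsedMilliseconds);
    }
}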
Your usage of connection pooling is breaking best practices.
Nowhere does it say to get rid of it after every statement. I normally keep a connection/repository around while doing processing and only close it when my function is finished (with MVC).
Yes, even the connection pool has overhead, and you seem to be really determined to make that show.
What I always do is create a single instance of my repository per request. Because I develop almost exclusively using the MVC pattern, that means I create a private class-level variable in each of my controllers and use it to service any requests within my action methods. Translated to WebForms (ASPX), that means I would create one in the BeforeLoad event (or whatever the event just before Page_Load is) and pass it around as needed. I don't think keeping a class-level instance is a good idea for WebForms though, but I can't remember enough to be sure.
The rule of thumb is to use one instance of your repo (or any other type of class, really) for the entirety of your request, which is usually a page load or Ajax call, and for the reasons you've pointed out.
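For WebForms, one rough way to get "one repository per request" without touching every page is to park it in HttpContext.Items; this is only a sketch, and UserRepository is again a hypothetical placeholder for your own repository type:

using System;
using System.Web;

public static class RequestRepository
{
    private const string Key = "__requestRepository";

    // Created lazily the first time a page (or control) asks for it,
    // then reused by every query for the rest of the request.
    public static UserRepository Current
    {
        get
        {
            var ctx = HttpContext.Current;
            if (ctx.Items[Key] == null)
                ctx.Items[Key] = new UserRepository();
            return (UserRepository)ctx.Items[Key];
        }
    }

    // Wire this up from Application_EndRequest in Global.asax so the
    // connection is released when the request finishes.
    public static void DisposeCurrent()
    {
        var repo = HttpContext.Current.Items[Key] as IDisposable;
        if (repo != null) repo.Dispose();
    }
}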
Remember: information on the internet is free, and you get what you pay for.
Objective: To have one DbContext instance per Workflow instance, for isolation of changes to prevent concurrency failures and to allow Transactions.
Environment:
I use Unity in an N-tier (Services, Repository, EF5) application and reuse all the same interfaces/classes across the website and WF (Unity.WF, with a custom Factory to add Unity RegisterTypes). I have the website's EF object running under the PerRequestLifetimeManager, which works well; I'm trying to do the same for my WF instances.
I have one server running WF services (WF 4.5 with a WCF endpoint); soon this will need to be two (if that makes any difference to anyone's answers). I have multiple WF definitions (XAMLX workflows); one in particular will soon be called thousands of times a day to process files uploaded by clients. The processing time can be anywhere between <1 minute and 1 hour depending on the amount of data uploaded. Because of this, I have set the incoming requests to persist immediately and then resume when the server is free.
Problem:
The documented LifetimeManager for EF with Unity.WF is the HierarchicalLifetimeManager, but this seems to use the same EF instance across all running WF instances of the same WF definition.
I have tried several WCF Unity lifetime managers, but they all rely on OperationContext.Current, which is only available if the WF does not persist, so this is not going to work for my situation.
So I tried the Microsoft WF Security Pack, which claims to be able to create/resurrect the OperationContext with an activity, but this also does not work.
Surely this is a common problem that others have faced?
My next move would be to create my own LifetimeManager and somehow know about the workflow instance id and return the correct EF object from a Dictionary, but how would I get that id when the code is trying to resolve a dependency 2 levels deep in a repository constructor?
Thanks
Update:
Firstly, I could use a NativeActivity container and set the DataContext in the execution properties, as in http://blogs.msdn.com/b/tilovell/archive/2009/12/20/workflow-scopes-and-execution-properties.aspx. This does allow my child activities to use the same DataContext.
My problem did not go away; it always failed in the same activity, and I suspect it is because of lazy loading. So I finally used a lock in the Execute function to ensure that only one WF can run this activity at a time, with a volatile static bool to check against so that the other WFs can Delay rather than block the maxConcurrentInstances.
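For anyone curious what that looks like, here is a rough sketch of the idea (names are made up, and this only coordinates workflows inside a single host process - it won't help once there is a second server): a small activity claims a static flag under a lock and reports whether it succeeded, so the workflow can either run the critical work or schedule a Delay and try again.

using System.Activities;

public sealed class TryEnterCriticalSection : CodeActivity<bool>
{
    private static readonly object Gate = new object();
    private static volatile bool _busy;

    protected override bool Execute(CodeActivityContext context)
    {
        lock (Gate)
        {
            if (_busy) return false;   // another workflow instance holds the section - caller should Delay
            _busy = true;
            return true;
        }
    }

    // Invoke from a companion activity (or a finally branch) once the protected work is finished.
    public static void Exit()
    {
        lock (Gate) { _busy = false; }
    }
}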
I hope this helps someone else.
I have a need to call an external web service on certain events in my application. I don't want to modify my application or take any dependency on that external web service. So, I need to think of a way to do this with some sort of external component.
One possible approach is to create a database view which gets filled up when certain events in my application occur. Then I would set up a trigger on that view which calls a CLR function. In that CLR function I would make the call to the external web service. By doing this, I get "real-time" integration, which is good. But this approach has downsides. The major one is that calling a web service from CLR does not seem to be a good idea, since it will block the main SQL thread (?!) until the CLR receives an answer.
Until now, I have only found that setting this property will help with performance issues:
System.Net.ServicePointManager.DefaultConnectionLimit = 9999
You can find more about it here.
Now, since you know my needs (real-time or at least close-to-real-time integration without any calls from my application to the external web service), is there a better way to do it?
One other approach I can think of is having some service which periodically checks for changes in my DB that need to trigger calls to the external web service. Once this service detects such a change, it calls the web service and transfers the data. This is not true real-time integration, of course. I must admit that, except for the performance issues, I like having triggers and CLR much more, since it guarantees real-time integration and has no effect on my application whatsoever.
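A bare-bones version of that polling service might look like this (the connection string, the dbo.OutboundEvents table, and the endpoint URL are all placeholders for illustration):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class EventForwarder
{
    private const string ConnectionString = "Server=.;Database=App;Integrated Security=true";
    private static readonly HttpClient Http = new HttpClient();

    public static async Task RunAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            var pending = new List<Tuple<int, string>>();

            using (var conn = new SqlConnection(ConnectionString))
            {
                await conn.OpenAsync(token);
                using (var cmd = new SqlCommand(
                    "SELECT TOP 50 Id, Payload FROM dbo.OutboundEvents WHERE SentAt IS NULL ORDER BY Id", conn))
                using (var reader = await cmd.ExecuteReaderAsync(token))
                {
                    while (await reader.ReadAsync(token))
                        pending.Add(Tuple.Create(reader.GetInt32(0), reader.GetString(1)));
                }

                foreach (var evt in pending)
                {
                    // The external service is called outside the trigger/CLR path,
                    // so a slow response never blocks SQL Server or the application.
                    await Http.PostAsync("https://external.example.com/events", new StringContent(evt.Item2));

                    using (var update = new SqlCommand(
                        "UPDATE dbo.OutboundEvents SET SentAt = SYSUTCDATETIME() WHERE Id = @id", conn))
                    {
                        update.Parameters.AddWithValue("@id", evt.Item1);
                        await update.ExecuteNonQueryAsync(token);
                    }
                }
            }

            await Task.Delay(TimeSpan.FromSeconds(5), token);   // the poll interval is your integration latency
        }
    }
}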
I am not sure that I would agree with the design of moving the web-service call into the database. However, I am sure there are reasons why you wouldn't want to change the application.
Here are a couple of options that you can try:
1) Instead of the database and CLR making web-service calls, use a message queue. NServiceBus is a good choice for passing event occurrences as messages, which can trigger this call (there's a rough sketch of the idea after this list).
2) If you are stuck with using SQL Server to store the events, look at SQL Server Service Broker.
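To illustrate option 1, here is a stripped-down sketch of the queue idea using plain MSMQ (the queue path and endpoint URL are placeholders); a bus like NServiceBus wraps this sort of plumbing and adds retries, error queues and so on:

using System.Messaging;   // reference System.Messaging.dll
using System.Net.Http;

public static class EventQueue
{
    private const string Path = @".\private$\app-events";

    // Producer side: whatever detects the event just drops a message on the queue.
    public static void Publish(string payload)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path);
        using (var queue = new MessageQueue(Path))
            queue.Send(payload);
    }

    // Consumer side: a separate process drains the queue and calls the web service,
    // so a slow or unavailable service never blocks the database or the application.
    public static void Consume()
    {
        using (var queue = new MessageQueue(Path))
        using (var http = new HttpClient())
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            while (true)
            {
                var message = queue.Receive();    // blocks until a message arrives
                var payload = (string)message.Body;
                http.PostAsync("https://external.example.com/events",
                    new StringContent(payload)).Wait();
            }
        }
    }
}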
At the new place I am working, I've been tasked with developing a web-application framework. I am new (6 months or so) to the ASP.NET framework and things seem pretty straightforward, but I have a few questions that I'd like to ask you ASP professionals. I'll note that I am no stranger to C#.
Long life objects/Caching
What is the preferred method of dealing with objects that you don't want to re-initialize every time a page is hit? I noticed that there is a cache manager that can be used, but are there any caveats to using it? For example, I might want to cache various things, and I was thinking about writing a wrapper around the cache that prefixes cache names so that I could implement different caches using the same underlying .NET cache manager (there's a rough sketch of this after the questions below).
1) Are there any design considerations I need to think about for the objects that I want to cache?
2) If I want to implement a manager of some sort that is around during the lifetime of the web application (thread-safe, obviously), is it enough to initialize it during app_start and kill it in app_end? Or is this practice frowned upon, and should any managers be created uniquely in the constructor/init method of the page being served?
3) If I have a long-term object initialized at app start, is it likely to get affected when the app pool is recycled? If it is destroyed at app end, is it a case of it simply getting destroyed and then recreated again? I am fine with this restriction, I just want to be a little clearer :)
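To make the wrapper idea concrete, something like this rough sketch is what I had in mind (names are made up, not a standard API): several logical caches share the single ASP.NET cache, with keys namespaced so they can't collide.

using System;
using System.Web;
using System.Web.Caching;

public class PrefixedCache
{
    private readonly string _prefix;
    private readonly Cache _cache;

    public PrefixedCache(string prefix)
    {
        _prefix = prefix + ":";
        _cache = HttpRuntime.Cache;       // same underlying cache for every wrapper
    }

    public void Set(string key, object value, TimeSpan lifetime)
    {
        _cache.Insert(_prefix + key, value, null,
            DateTime.UtcNow.Add(lifetime), Cache.NoSlidingExpiration);
    }

    public T Get<T>(string key) where T : class
    {
        return _cache.Get(_prefix + key) as T;
    }
}

// Usage: var products = new PrefixedCache("products");
//        products.Set("all", productList, TimeSpan.FromMinutes(10));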
Long Life Threads
I've done a bit of research on this and this question is probably redundant. It seems it is not safe to start a worker thread in the ASP.NET environment and that one should instead use a Windows service for long-running tasks. The latter isn't exactly a problem; the target environments will have the facility to install services, but I just wanted to double-check that this is absolutely necessary. I understand threads can throw exceptions and die, but I do not understand the reasoning behind prohibiting them. If .NET provided a thread framework that encompassed System.Thread but also provided notifications for when the application server was going to recycle the app pool, we could actually do something about it rather than just keel over and die at the point we were stopped.
Are there any solutions to threading in ASP.NET, or is it basically "use a service"?
I am sure I'll have more queries, but this is it for now.
EDIT: Thank you for all the responses!
So here's the main thing that you're going to want to keep in mind: IIS may get reset or may reset itself (based on criteria) while you're working. You can never know when that will happen unless it stops rendering your page while you're waiting on the response (in which case you'll eventually get a browser notice that the page stopped responding).
Threads
This is why you shouldn't use threads in ASP.NET apps. However, that's not to say you can't. Once again, you'll need to configure the IIS engine properly (I've had it hang when spawning a lot of threads, but that may have been machine-dependent). If you can trust that nobody will cause ASP.NET to recompile your code/restart your application (by saving the web.config, for instance), then you will have fewer issues than you might otherwise.
Instead of running a Windows service, you could use an ASMX or WCF service, which also runs on IIS/.NET. That's up to you. But with multiple service pools it allows you to keep everything "in the same environment" as far as installations and builds are concerned. They obviously don't share the same process pool/memory space.
"You're Wrong!"
I'm sure someone will read this far and go "but you can't thread in ASP.NET!!!", so here's the link that shows you how to do it, from the venerable MSDN: http://msdn.microsoft.com/en-us/magazine/cc164128.aspx
Now onto Long life objects/Caching
Caching
So it depends on what you mean by caching. Is this per user, per system, per application, per database, or per page? Each is possible, but takes some contrivance and complexity, depending on needs.
The simplest way to do it per page is with static variables. This is also highly dangerous if you're using it for per-user data, because there's no indication to the end user that the value is going to change if more than one user uses the page. Instead, if you need something to live with the user while they work with a particular page, you could either stuff it into Session (server-side caching, stays with the user, usable across multiple pages) or stick it into ViewState.
The cache manager you reference above would be good for application-level caching, where everyone using the web app can share the same data store. That might be good for intensive queries where you want to get the values back as quickly as possible, as long as they're not stale. That's up to you to decide. Also, things like application settings could be stored there, if you use a database layer for storage.
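A tiny illustration of that scoping difference, with made-up page and key names:

using System;
using System.Web.UI;

public partial class ReportPage : Page
{
    private static int _sharedCounter;            // one value for ALL users - easy to misuse

    protected void Page_Load(object sender, EventArgs e)
    {
        _sharedCounter++;                         // everyone increments the same number

        var mine = (int?)Session["myCounter"] ?? 0;
        Session["myCounter"] = mine + 1;          // each user has their own count
    }
}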
Long term cache objects
You could initialize it in the app_start with no problem, and the same goes for destroying it at the end if you felt the need, but yes, you do need to watch out for what I described at first about the system throwing all your code out and restarting.
Keel over and die
But you don't get notified when you (the app pool here) are going to be restarted (as far as I know), so you can pretty much keel over and die on anything. Always assume the app is going to go down before your request, and that every request is the first one.
Really though, that just leads back to web design in the first place. You don't know whether this is the first visitor or the fifty-millionth (unless you're storing that information in memory, of course), so just as the app is stateless, you also need to plan your architecture to be as stateless as possible. That's where web apps are great.
If you need state on a regular basis, consider sticking with desktop apps. If you can live with stateless-ness, welcome to ASP.NET and web development.
1) The main thing about caching is understanding the lifetime of the cache, and the effects of caching (particularly large) objects in cache. Consider caching a 1MB object in memory that is generated each time your default.aspx page is hit; and after a year of production you're getting 10,000 hits an hour, and object lifetime is 2 hours. You can easily chew up TONS of memory, which can affect performance, and also may cause things to be prematurely expired from the cache, which in turn can cause other issues. As long as you understand the effects of all of this, you're fine.
2) Starting it up in Application_Start and shutting it down in Application_End is fine. You can also implement a custom HttpApplication with an HTTP module.
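In Global.asax terms, that looks roughly like this (ReportManager is a hypothetical, thread-safe manager of yours):

using System;
using System.Web;

public class Global : HttpApplication
{
    public static ReportManager Manager { get; private set; }

    protected void Application_Start(object sender, EventArgs e)
    {
        Manager = new ReportManager();      // runs once per app domain start
    }

    protected void Application_End(object sender, EventArgs e)
    {
        // Runs on a clean shutdown and on app pool recycles; Application_Start
        // will run again when the next request spins the app back up.
        if (Manager != null) Manager.Dispose();
        Manager = null;
    }
}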
3) Yes, when your app pool is recycled it calls Application_End and everything is shutdown and destroyed.
4) (Threads) The issue with threads comes up in relation to scaling. If you hit that default.aspx page and it fires up a thread, and that page gets hit 10,000 times in 2 minutes, you could potentially have a ton of threads running in your application pool. Again, as long as you understand the ramifications of firing up a thread, you can do it. ThreadPool is another story: the ASP.NET runtime uses the ThreadPool to process requests, so if you tie up all the ThreadPool threads, your application can hang because there isn't a thread available to process the request.
1) Are there any design considerations I need to think about for the objects that I want to cache?
2) If I want to implement a manager of some sort that is around during the lifetime of the web application (thread-safe, obviously), is it enough to initialize it during app_start and kill it in app_end? Or is this practice frowned upon, and should any managers be created uniquely in the constructor/init method of the page being served?
There's a difference between data caching and output caching. I think you're looking for data caching which means caching some object for use in the application. This can be done via HttpContext.Current.Cache. You can also cache page output and differentiate that on conditions so the page logic doesn't have to run at all. This functionality is also built into ASP.NET. Something to keep in mind when doing data caching is that you need to be careful about the scope of the things you cache. For example, when using Entity Framework, you might be tempted to cache some object that's been retrieved from the DB. However, if your DB Context is scoped per request (a new one for every user visiting your site, probably the correct way) then your cached object will rely on this DB Context for lazy loading but the DB Context will be disposed of after the first request ends.
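To make that last point concrete, here is a sketch (AppDbContext and Customers are hypothetical) of caching data that has been fully materialized before the context is disposed, rather than an entity that would still need its context for lazy loading:

using System;
using System.Linq;
using System.Web;
using System.Web.Caching;

public static class CustomerCache
{
    public static string[] GetCustomerNames()
    {
        var cached = HttpRuntime.Cache.Get("customerNames") as string[];
        if (cached != null) return cached;

        using (var db = new AppDbContext())     // per-request or per-call context
        {
            // Materialize everything we need before the context goes away.
            var names = db.Customers.Select(c => c.Name).ToArray();

            HttpRuntime.Cache.Insert("customerNames", names, null,
                DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
            return names;
        }
    }
}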
3) If I have a long-term object initialized at app start, is it likely to get affected when the app pool is recycled? If it is destroyed at app end, is it a case of it simply getting destroyed and then recreated again? I am fine with this restriction, I just want to be a little clearer :)
Perhaps the biggest issue with threading in ASP.NET is that it runs in the same process as all your requests. Even if this weren't an issue in and of itself, IIS can be configured (and if you don't own the servers, almost certainly will be configured) to shut down the app if it's inactive (which you mentioned), which can cause issues for these threads. I have seen solutions to that ranging from making sure IIS never recycles the app pool to spawning a thread that hits the site to keep it alive, even on hosted servers.