We have a business component that reads and inserts data across multiple tables. It uses Entity Framework.
The business component is consumed by an MVC web application and by a console application.
We are observing a huge difference in performance between these two cases: it is very slow from the console application.
The component processes a particular case in 10 seconds when invoked from the web application, but takes around 100 seconds from the console application. This is happening in the production environment.
We noticed some difference in the test environment, but it was nowhere near this large (10x).
Can someone please suggest what may be causing this and what steps we can take to improve performance?
Thanks in advance,
Rohit
IMHO the reason is that, in the web application, the "context views" are built and loaded once, at the start of the web app, so instantiating a DbContext is very fast.
By contrast, in the console app, the "context views" are rebuilt every time the app is launched. This view building can be very expensive, depending on the complexity of the model.
Building these "context views" can be seen as the initialization of the ORM. This performance issue is particularly pronounced with EF 4.x.
Please read EF Performance considerations.
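As a rough illustration, a console app can pay this initialization cost once, up front, instead of inside the timed work. A minimal sketch, assuming EF 6 (MyDbContext is a placeholder for your real context type):

using System.Data.Entity; // EF 6

// Placeholder context; substitute your real DbContext type.
class MyDbContext : DbContext { }

class Warmup
{
    static void Main()
    {
        // Force EF to build its metadata/context views once at startup,
        // so the first real query does not absorb the initialization cost.
        using (var ctx = new MyDbContext())
        {
            ctx.Database.Initialize(force: false);
        }
    }
}

On EF 4.x you can achieve a similar effect by pre-generating views at build time, so neither application pays this cost at runtime.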
I have a webapp written in C#, based on ASP.NET.
It loads (with LoadLibraryEx) an unmanaged DLL written with C++Builder.
As I have performance issues, I ran some tests and comparisons, always invoking the same method in the DLL many times and taking the average times.
I discovered that the DLL:
loaded by a C++Builder console application, takes: 4.922 s
loaded by a C# console application, takes: 5.484 s
loaded by a minimal C# ASP.NET application hosted on IIS 7.5, takes: 9.551 s
As you can see, cases 1 and 2 have very similar performance.
Why is case 3 so slow? Maybe IIS slows down the DLL?
To investigate, I profiled it with JetBrains dotTrace.
Is there any suggested IIS tuning?
Is there any suggested fast alternative to IIS for hosting ASP.NET webapps?
Should I consider porting the webapp to C++Builder?
EDIT:
I tried migrating my webapp to ASP.NET Core (instead of .NET Framework) and running it on Kestrel. It takes 6.042 seconds, so not much overhead compared to the console app.
It seems that IIS is the culprit... why?
One cause that may explain your results is that IIS does not (by default) load your web app until it is first accessed. This can make it appear slower, because the entire app spins up only after you navigate to the site, whereas a console app loads the library as soon as the class that calls it is loaded into memory.
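If cold start is the suspect, one low-tech check is to warm the site up with a throwaway request before taking any measurements. A hedged sketch (the URL is a placeholder):

using System.Net.Http;
using System.Threading.Tasks;

class WarmUp
{
    static async Task Main()
    {
        // One throwaway request so IIS spins up the app before any timed runs.
        using (var client = new HttpClient())
        {
            await client.GetAsync("http://localhost/myapp/");
        }
    }
}

IIS also ships an Application Initialization feature that can preload the application pool so the first real request does not pay this cost.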
I am developing a multi-container app in Docker. One of the services is a long-running console application in C# which basically polls a database and sends the data to a server. I just keep the service alive with this statement:
while(true);
Now I'm thinking of changing this service to a .NET Core worker service (or even a Windows service, since I am only using Windows containers on Windows hosts). I have read some articles about the pros of worker services, but they all seem quite obsolete for a containerized application, since my container is already running as a kind of "background service" (and I only use one service per container/image). So my question is:
Is there any benefit or drawback when running a core worker service in docker compared to running a console app in docker?
Update: With "worker service" I refer to the new worker service template available in .NET Core 3.x: https://www.stevejgordon.co.uk/what-are-dotnet-worker-services
In short, your happy path code will probably function "about the same".
However:
The benefit of moving to the "generic host" is that you get the reusable components Microsoft has created for you, instead of rolling your own.
This means (IMHO) better code, because you are not personally dealing with a lot of the common issues of a long-running process.
Basically, you get a lot of plumbing code "for free" versus rolling your own implementation.
Pre 3.0/3.1, a lot of this functionality was married into the ASP.NET namespaces. The 3.0/3.1 updates largely moved it into a common place for use by both ASP.NET and plain .NET; that is, they decoupled it from ASP.NET.
Setup (there is a dedicated method, AddHostedService):
services.AddHostedService<MyWorkerThatImplementsIHostedService>();
So when a future developer looks at the code above, they know exactly what is going on (versus having to decipher a custom-rolled implementation).
Or in a larger code example:
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices(services =>
            {
                // Register the worker; the host manages its lifetime.
                services.AddHostedService<MyWorkerThatImplementsIHostedService>();
            });
}
The above code looks ASP.NET-ish, but it is actually plain .NET (non-ASP.NET) code.
In other words, you get improved consistency.
Shut Down:
You get all the "shut down" options built in, and these are graceful shutdown options, which unfortunately "happy path" developers do not usually consider. If there is any single reason to adopt this mini library, having some kind of GRACEFUL exit would be it: a hard exit can leave your processing in an unknown, hard-to-troubleshoot state. Shutdown can be triggered in several ways (see the sketch after these links):
CTRL-C
Programmatically (see https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.hosting.ihostapplicationlifetime.stopapplication?view=dotnet-plat-ext-3.1 )
Kubernetes Shutdown
Microsoft has even thought through "can I delay the final shutdown a little?"
See: How to apply HostOptions.ShutdownTimeout when configuring .NET Core Generic Host?
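Here is a minimal sketch (the type name is hypothetical) of a worker that honors the host's shutdown signal, so CTRL-C, StopApplication, and a container SIGTERM all end the polling loop gracefully instead of killing it mid-cycle:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class PollingWorker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // ... poll the database and push the results to the server ...

            // Passing the token lets shutdown interrupt the wait immediately.
            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}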
Here is a decent link that shows some options (Timer vs Queue vs Monitor)
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-3.1&tabs=visual-studio
You can also deploy your code as :
Container
Windows Service
Linux Daemon (see https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.hosting.systemd.systemdhelpers.issystemdservice?view=dotnet-plat-ext-3.1 ) (this is usually a new concept for traditional .NET Framework developers)
Azure App Service
Oh yeah, Microsoft has even thought through "HealthChecks" for you:
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/health-checks?view=aspnetcore-5.0#separate-readiness-and-liveness-probes-1
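In an ASP.NET Core 3.1-style app, wiring this up is only a few lines. A hedged sketch (the /healthz path is an arbitrary choice):

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddHealthChecks(); // register the health-check services
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            // Endpoint for an orchestrator's liveness/readiness probes.
            endpoints.MapHealthChecks("/healthz");
        });
    }
}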
Bottom line: use this instead of custom-rolled stuff. It is well thought out.
:::::::::::: Previous comments (below) ::::::::::::
Long-running console application in C#
Short version:
With modern code deployment strategies, it's not just technical decisions, it's financial decisions.
Longer version:
I've had this discussion recently, as some code bases have been earmarked for "convert to dotnet core" and "how do we convert our older windows services?".
Here are my thoughts:
Philosophically.
You have to think about "where do I deploy, and how much does that cost?", not just the technical problem. You mention Docker, but how are you going to deploy it? Kubernetes? (AKS in Azure? Something else?) That's an important piece of information.
IMHO: with the "cloud", or even on-premise Kubernetes, you do NOT want a "Windows service" mentality where something just runs and runs and runs, constantly racking up costs, unless you truly have to have it.
You want to startup a process, let it run, and close it down as soon as possible.
Now, if you need it to run every hour, that's fine.
Now, if you need "instant" or "as soon as possible processing", (like watching for messages on a queue), then maybe you pay the price and have something that runs all of time, because processing those messages is more important than the price you pay for the running services.
So technically, I like the idea of
https://www.stevejgordon.co.uk/what-are-dotnet-worker-services
WHAT ARE WORKER SERVICES? Applications which do not require user interaction. Use a host to maintain the lifetime of the console application until the host is signalled to shut down. Turning a console application into a long-running service. Include features common to ASP.NET Core such as dependency injection, logging and configuration. Perform periodic and long-running workloads.
But financially, I have to counter that with the cost of running on Azure (or even on premise).
Processing Message Queue messages means --> "yep, gotta run all the time". So I pay the price of having it run all the time.
If I know my client posts their import files in the middle of the night, one time, then I don't want to pay that price of always running. I want a console app that fires once in the morning. Get in, get out. Do the processing as quick as possible and get out. Kubernetes has scheduling mechanisms.
With Azure, it's not just technical decisions, it's financial decisions.
Not mentioned: if your code is scheduled to run every hour but starts taking longer than an hour to run, you have a different problem. Quartz.Net is one way to deal with these overlap issues.
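For example, Quartz.Net can serialize runs so a new execution waits instead of overlapping the previous one. A hedged Quartz 3.x-style sketch (the job name and body are placeholders):

using System.Threading.Tasks;
using Quartz;

// [DisallowConcurrentExecution] tells the scheduler not to start a new run
// of this job while a previous run is still executing.
[DisallowConcurrentExecution]
public sealed class HourlyImportJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // ... do the hourly work ...
        return Task.CompletedTask;
    }
}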
Keep in mind, I had to be really firm in this argument about cost. Most developers just wanted to convert the Windows services to dotnet-core and be done with it. But that is not long-term thinking as more code moves to the cloud and the costs of cloud operation come into play.
PS
Also, make sure you move all your code DOWN INTO A BUSINESS LAYER, and let any of these hosts:
Console.App (just a regular one)
.NET Core worker service
Quartz.Net scheduled job
be a thin top layer that calls your business logic layer; then you don't paint yourself into a corner. The thinner you make that top layer, the better. Basically, my console apps are:
static void Main(string[] args)
{
    // wire up IoC
    // pull the business-logic object out of the IoC container
    // call a single method on that business-logic object
}
plus some appsettings.json files where Program.cs sits, and nothing (or very little) else. Push everything down to the business logic layer as soon as possible. A concrete version of that thin Main is sketched below.
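A hedged concrete sketch using Microsoft.Extensions.DependencyInjection (IImportService and ImportService are hypothetical stand-ins for your business layer):

using System;
using Microsoft.Extensions.DependencyInjection;

public interface IImportService { void RunOnce(); }

public sealed class ImportService : IImportService
{
    public void RunOnce() => Console.WriteLine("doing the real work"); // placeholder
}

public static class Program
{
    public static void Main(string[] args)
    {
        // Wire up IoC.
        var services = new ServiceCollection();
        services.AddSingleton<IImportService, ImportService>();

        // Pull the business-logic object out of the container,
        // call a single method on it, and exit.
        using (var provider = services.BuildServiceProvider())
        {
            provider.GetRequiredService<IImportService>().RunOnce();
        }
    }
}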
If you are always going to run in a container, then go with a console app. I see no inherent benefit of running as a service, since containers, under proper orchestration such as Kubernetes, should be considered ephemeral. Also, you will have less friction running your .NET Core 3.1.x application as a Linux or Windows container if you keep it simple, i.e. a console app.
Also, I would use the following loop in your console app to ensure it plays nice with the CPU allocated to the container:
while (true)
{
    // Sleeping yields the CPU instead of hot-spinning at 100%.
    Thread.Sleep(1 * 1000);
}
Sometimes my MVC 4 application starts very slowly, but all the following requests come up quickly. It's running on IIS 8 and it uses Forms authentication.
The first start-up might take 20 seconds or so. I'm not 100% sure how long it takes before I get a slow start-up again, but I'd guess more than an hour.
It's the same issue as described here:
MVC slow if site has been idle
So, checking out the application pool recycling idea, I stopped the application pool, started it again, and browsed to the address, but it still came up quickly. I then ran the PowerShell command (Get-Process -Id ).StartTime on the IIS server, and it told me that the last recycle of this application pool was when I started it.
I suppose that excludes pool recycling?
The project is using Devexpress MVC layout, and I have removed all the assemblies/references that I don't need, but I didn't notice much difference afterwards.
The other applications on this IIS server are made with Web Forms, and they always come up quickly. They also don't use Forms authentication.
As a workaround I'm about to make a service that opens the address every 30 minutes or so, but I would still be interested in figuring out the real cause.
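For reference, the workaround I have in mind is roughly this minimal keep-alive sketch (the URL and interval are placeholders):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class KeepAlive
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            while (true)
            {
                // Touch the site so the application never idles out.
                await client.GetAsync("http://localhost/myapp/");
                await Task.Delay(TimeSpan.FromMinutes(30));
            }
        }
    }
}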
Any ideas?
This may happen if the configuration is not set up properly.
You can use the following techniques to improve performance:
1. Enable compression;
2. Optimize caching;
3. Optimize CSS;
4. Optimize HTML;
5. Optimize images;
6. Optimize callback management;
7. Optimize data management.
See the link below for more details on how to improve the performance of a website that uses DevExpress. DevExpress support covers each of these points in detail:
https://www.devexpress.com/Support/Center/Question/Details/K18541
I've got a serious problem with our application. We are developing a GUI application plus a server, which can be used in two different ways:
The GUI application invokes an embedded server which runs in the same process as the GUI.
The GUI application communicates via REST with a standalone server application running in a separate process.
We are using Spring.NET, so there are only small differences between the two setups. The server is just one context: solution #1 instantiates it directly as a new Spring.NET context, while solution #2 has two exe files, GUI.exe plus a standalone server exe. As I said, both application flows are almost the same.
What's the issue? The standalone server is three times slower than solution #1; that is, the separate standalone server application is three times slower than the embedded one.
I used dotTrace and found the reason in 10 minutes. The server uses NHibernate, which gets and sets properties via reflection very often.
In the first solution, where the GUI application hosts the embedded server, reflection is very quick. But on the separate standalone server, the reflection work is very slow.
Here are stack traces for slow solution:
- 5,874 ms System.RuntimeMethodHandle.PerformSecurityCheck(Object, IRuntimeMethodInfo, RuntimeType, UInt32)
- 4,642 ms System.Security.CodeAccessSecurityEngine.ReflectionTargetDemandHelper(Int32, PermissionSet)
- 36ms System.Security.CodeAccessSecurityEngine.CheckSetHelper(CompressedStack, PermissionSet, PermissionSet, PermissionSet, RuntimeMethodHandleInternal, RuntimeAssembly, SecurityAction)
- 1ms System.Reflection.RuntimeMethodInfo.get_Value
Fast solution:
- 5 ms • 10,740 calls • System.RuntimeMethodHandle.PerformSecurityCheck(Object, IRuntimeMethodInfo, RuntimeType, UInt32)
- 1 ms • 10,740 calls • System.Reflection.RuntimeMethodInfo.get_Value
As you can see, the killer in the slow solution is the additional call to System.Security.CodeAccessSecurityEngine.ReflectionTargetDemandHelper. The standalone server should automatically run fully trusted, just as the GUI does.
Do you have any ideas on how to switch this check off, or how to set up the standalone server application correctly? When I compare both app.config files, I'm not able to find any difference relevant to this issue.
EDIT:
We finally tracked down the cause, and the fix turned out to be simple.
The standalone server instantiated the Spring.NET context using ContextRegistry.GetContext(), while the embedded one used the standard new XmlApplicationContext(new[] {"..."}). This small difference caused the significant performance hit.
It seems that Spring's app.config context handler does something wrong here, but we have not had time to investigate the real cause yet.
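A hedged sketch of the two instantiation paths described above ("objects.xml" stands in for the real definition files):

using Spring.Context;
using Spring.Context.Support;

class ContextPaths
{
    static void Main()
    {
        // Slow path: resolve the context through the app.config section handler.
        IApplicationContext slow = ContextRegistry.GetContext();

        // Fast path: build the context directly from the XML object definitions.
        IApplicationContext fast = new XmlApplicationContext(new[] { "objects.xml" });
    }
}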
How are the embedded and standalone servers being created? Are they generated code or code that you've written? Have you verified that the standalone server is running under full trust? What framework is being used to handle the REST requests? I've used NHibernate in a similar manner before (client talking to web service and web service using NH) and never seen a 5-6 second delay per request because of CAS. Are you sure that you are caching your SessionFactory properly?
There don't seem to be many Windows Workflow Foundation gurus out there :(
Here are a couple of challenges that I face:
How many workflow runtimes should be running in an ASP.NET MVC application? One per application, per session, or per request?
How frequently should the workflow runtime be started and stopped? Once per application instance, once per session or once per request?
What are the pros and cons of doing one or another in the above options?
Any comments or suggestions are welcome,
Thanks,
Cullen
You would normally only run one workflow runtime per application. It is possible to define more than one, and there may be some complex scenarios where that is desirable, but it's highly unlikely. I can't see any scenario where multiple runtimes with the same configuration would be run in the same process.
For a web-hosted workflow you really need the SqlWorkflowPersistenceService. IIS expects to be able to recycle an application pool with minimal impact on the application, so you need idled workflows to be persisted so that they survive such recycles.
On a similar note, you should use the ManualWorkflowSchedulerService, which plays nicely with ASP.NET's use of threads; it is also really handy for performing end-to-end processing of a request into a response through a workflow on a single thread. Just be sure to include the useActiveTimers="true" attribute so that delay activities work.
In line with the above, you need to be sure that no active workflow takes longer to complete or go idle than the application pool's shutdown time limit; otherwise, on recycle, IIS may force the process to terminate before a workflow has persisted.
As for starting and stopping the runtime, it is again difficult to see a scenario where you wouldn't just start it on application start and leave it running. I guess if you have a workflow which never idles but just runs from beginning to end, and you only run such workflows very occasionally, it might be simpler to start the runtime and end it afterwards. However, even that can get messy; I wouldn't bother, just start it on app start and be done with it. A minimal hosting sketch follows.
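A minimal hosting sketch along those lines, assuming a Global.asax code-behind (the connection string is a placeholder):

using System;
using System.Web;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

public class Global : HttpApplication
{
    public static WorkflowRuntime Runtime { get; private set; }

    protected void Application_Start(object sender, EventArgs e)
    {
        // One runtime for the whole application, started once and kept running.
        Runtime = new WorkflowRuntime();
        // useActiveTimers: true so that delay activities still fire.
        Runtime.AddService(new ManualWorkflowSchedulerService(true));
        Runtime.AddService(new SqlWorkflowPersistenceService(
            "Data Source=.;Initial Catalog=WorkflowStore;Integrated Security=True"));
        Runtime.StartRuntime();
    }

    protected void Application_End(object sender, EventArgs e)
    {
        // Give idled workflows a chance to persist before the recycle.
        if (Runtime != null)
        {
            Runtime.StopRuntime();
        }
    }
}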
How many workflow runtimes should be running in an ASP.NET MVC application?
one per application, unless you need more for scalability purposes (too many requests)
How frequently should the workflow runtime be started and stopped?
typically, once per application instance
The pros and cons are straightforward: you can scale better with more instances handling session requests, but it takes more overhead to manage them all.
Your best bet is to use just enough of what you need and grow later if necessary.