I've got a serious problem with our application. We are developing a GUI application plus a server that can be used in two different ways:
1. The GUI application invokes an embedded server, which runs in the same process as the GUI.
2. The GUI application communicates via REST with a separate standalone server application, running in its own process.
We are using Spring.NET, so there are only small differences between the two solutions. The server is just one context: solution #1 instantiates it directly as a new Spring.NET context, while solution #2 ships two exe files, GUI.exe plus a standalone server exe. As I said, both application flows are almost the same.
What's the issue? The standalone server is three times slower than solution #1; that is, the separate standalone server application is three times slower than the embedded one.
I used dotTrace and found the reason in 10 minutes. The server uses NHibernate, which gets/sets properties via reflection very often.
In the first solution, where the GUI application hosts the embedded server, reflection is very quick. But when the work runs in the separate standalone server, reflection is very slow.
Here are the stack traces for the slow solution:
- 5,874 ms System.RuntimeMethodHandle.PerformSecurityCheck(Object, IRuntimeMethodInfo, RuntimeType, UInt32)
- 4,642 ms System.Security.CodeAccessSecurityEngine.ReflectionTargetDemandHelper(Int32, PermissionSet)
- 36 ms System.Security.CodeAccessSecurityEngine.CheckSetHelper(CompressedStack, PermissionSet, PermissionSet, PermissionSet, RuntimeMethodHandleInternal, RuntimeAssembly, SecurityAction)
- 1 ms System.Reflection.RuntimeMethodInfo.get_Value
Fast solution:
- 5 ms • 10,740 calls • System.RuntimeMethodHandle.PerformSecurityCheck(Object, IRuntimeMethodInfo, RuntimeType, UInt32)
- 1 ms • 10,740 calls • System.Reflection.RuntimeMethodInfo.get_Value
As you can see, the killer in the slow solution is the additional call to System.Security.CodeAccessSecurityEngine.ReflectionTargetDemandHelper. The standalone server should automatically run as fully trusted, just as the GUI does.
Do you have any idea how to switch this check off, or how the standalone server application should be set up? When I compare both app.config files, I can't find any difference related to the described issue.
EDIT:
We finally investigated the reason, and the solution turned out to be simple.
The standalone server instantiates Spring.NET's context using ContextRegistry.GetContext(), but the embedded one uses a standard new XmlApplicationContext(new[] {"..."}). This simple difference results in the significant performance hit.
It seems that Spring's app.config context handler does something wrong, but we haven't had time to investigate the real cause yet.
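For anyone hitting the same thing, the difference boiled down to these two ways of creating the context (a minimal sketch; the config file name is a placeholder for your own object definitions):

```csharp
using Spring.Context;
using Spring.Context.Support;

// Embedded server: the context is built directly from the XML files.
// Reflection was fast in this configuration.
IApplicationContext embedded = new XmlApplicationContext(new[] { "objects.xml" });

// Standalone server: the context is resolved through the <spring/context>
// section handler in app.config. This was the path that triggered the
// expensive CAS demands for us.
IApplicationContext standalone = ContextRegistry.GetContext();
```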
How are the embedded and standalone servers being created? Are they generated code or code that you've written? Have you verified that the standalone server is running under full trust? What framework is being used to handle the REST requests? I've used NHibernate in a similar manner before (client talking to web service and web service using NH) and never seen a 5-6 second delay per request because of CAS. Are you sure that you are caching your SessionFactory properly?
Related
We run a Windows Forms application developed in C# in our company, and one problem is giving us headaches.
When we run the application from a local machine, in drive C:, for example, the application loads and runs fast. It's heavily database-based, which means it does a lot of queries to our MSSQL server, and it runs all queries in less than 1 second, while running from a local drive.
If we run the same application from a mapped network drive (not a UNC path, an M: mapped drive), it loads fast, but the queries take ages to complete, and we can hardly see the result.
ClickOnce is not an option for us (due to reasons that are not subject to discussion here), and we have several other 3rd-party applications that run fast when loaded from the same mapped M: drive.
I did some research, and the closest question I could find is this one:
http://stackoverflow.duapp.com/questions/2554716/my-c-net-application-is-running-slower-when-the-exe-is-located-on-the-network
When I right-click the application there's no "unblock" option available, which tells me that there's no secondary stream attached to the file and it's "trusted" by the machine.
Also, I tried adding <loadFromRemoteSources enabled="true"/> in the .config file, but it caused no changes in the application performance so far.
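For reference, that element needs to sit under <runtime> in the exe's .config file for the runtime to pick it up; a minimal sketch of the placement:

```xml
<configuration>
  <runtime>
    <!-- Grants full trust to assemblies loaded from remote locations
         (network shares) on .NET 4.0+ -->
    <loadFromRemoteSources enabled="true"/>
  </runtime>
</configuration>
```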
The application is not signed, and the slowness happens with both debug and release versions of the application.
What are we doing wrong?
PS: I'm still trying to pinpoint the exact command that's taking longer to work, but no luck so far.
EDIT: Adding new information. It seems that the problem wasn't the network per se, but the fact that the application was running a background task that failed because it was running from the network. This failure wasn't wrapped in a try-catch block and was preventing the background task from returning properly, creating a major delay in the application's response.
That means it was our development bug, not Windows' fault. Thanks for the answers, I'll vote to close this question.
I recently found one scenario where exactly this was happening in a .NET WinForms SQL Server application.
On one machine the application was lightning-fast; on another, queries took seconds.
The second machine was configured to use a VPN dialed via PPTP. The VPN automatically reconnected whenever the computer got online, even if the machine was inside the company network (where no VPN was needed). The VPN auto-redial trick always seemed very useful... until I found that, because of it, the connection to the SQL server basically always went through the VPN. Manually disconnecting the VPN helped instantly: responses got fast again.
I'm not saying this is the definite solution in your case, but it is one of the things that can cause almost unacceptable query slowness. I observed it first-hand.
I wrote a console application that is currently running on the server. It doesn't require any user input (other than parameters at start, which can be passed as start parameters).
Unfortunately this setup is fragile, because someone can accidentally turn it off (e.g. when connecting to the server using Remote Desktop Connection and then logging off instead of simply disconnecting). I need it to run all the time.
One solution would be to turn it into a Windows service, but so far using SC or third-party tools like NSSM or RunAsService has failed (SC and NSSM create a service, but the service cannot be started).
I could completely rewrite my program to be a proper service... but to be honest I'm struggling with it (and from what I've read it's not a recommended practice).
Finally, I could leave it as a console app and use Task Scheduler to run it, which does look like a decent solution, but (as I mentioned) I need it to run all the time (it can be turned off and on; very short downtimes are not an issue).
Could I please ask for any help with setting up such a task?
SOLVED
After a few attempts, I've turned it into a service using Topshelf and this great guide.
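For anyone else going this route, the Topshelf pattern looks roughly like this (a hedged sketch; MyWorker and the service names are placeholders for your own code):

```csharp
using Topshelf;

public class MyWorker
{
    public void Start() { /* kick off your existing console-app logic here */ }
    public void Stop()  { /* shut down cleanly */ }
}

public class Program
{
    public static void Main()
    {
        // Topshelf lets the same exe run as a console app (F5 debugging)
        // or install itself as a service via "MyApp.exe install".
        HostFactory.Run(x =>
        {
            x.Service<MyWorker>(s =>
            {
                s.ConstructUsing(name => new MyWorker());
                s.WhenStarted(w => w.Start());
                s.WhenStopped(w => w.Stop());
            });
            x.RunAsLocalSystem();
            x.SetServiceName("MyConsoleService");
            x.SetDisplayName("My Console Service");
        });
    }
}
```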
There are two methods you can use to run a .NET program constantly in Windows. Both have advantages and disadvantages.
Windows Service
Recommended Solution
Will start the service on computer startup (doesn't require someone to log on)
Has some (limited) error handling in the form of restarts
Good for very reliable services that can run for long periods of time
Service handles its own state
Can easily crash due to memory leaks
IIS Application Server
Not recommended solution
Starts with Windows, but might not start your application
Requires newer windows to allow always on configuration
Always on configuration is complicated
State is handled by IIS
Much better resiliency to crappy programming, as IIS will restart for you
IIS will also likely kill your threads for you (so your scheduler will stop working)
I suspect the reason you were told that a Windows service is not recommended is that it could crash due to memory leaks. But that issue will occur no matter what, since your program needs to run for a long time (it's not a problem with Windows services, but with long-lived processes).
There are a number of rules that need to be followed to write a functional windows service including but not limited to
the ability to complete the initialization process in a specific time
a general understanding of threads
There is nothing inherently bad about writing a Windows service; they just require more effort and an installer.
Based on your description, a scheduled job seems to fit your requirements.
If you don't want to rewrite your console app into a Windows service and want it running all the time, the only solution I can see is:
Create a small Windows service that checks whether your console process is running.
If it finds no console process, start a new one.
// Note: GetProcessesByName takes the process name without the .exe extension.
Process[] pname = Process.GetProcessesByName("YourConsoleApp");
if (pname.Length == 0)
    Process.Start("YourConsoleApp.exe");
// else: already running, do nothing
We ran into strange SQL / LINQ behaviour today:
We used to use a web application to perform some intensive database actions on our system. Recently we moved to a WinForms interface for various reasons.
We found that performance has decreased seriously: an action that used to take about 15 minutes now takes a whole hour. The strange thing is that it's the exact same method being called. The method performs quite a bit of reading/writing using LINQ to SQL, and profiling on the client machine showed that the problematic section is the SQL action itself, in LINQ's "Save" method.
The only difference between the cases is that in one case the method is called from a web application's code-behind (MVC in this case), and in the other from a Windows Form.
The one idea I could come up with is that SQL performance has something to do with the identity of the user accessing the db, but I could not find any support for that assumption.
Any ideas?
Did you run both tests from the same machine? If not, hardware differences could be the issue... or the network... one machine could be in a higher-speed section of your network, like in the same VLAN as the SQL server. Try running the client code on the same server the web app was running on.
Also, if your app is updating progress synchronously, it could be waiting a long time for the display to update, as opposed to working with a stream à la Response.Write.
If you are actually outputting progress as you go, you should make sure the progress updates are raised as events and that displaying them happens on another thread, so that processing isn't waiting on the display. Actually, you should probably put the processing on its own thread and just have an event handler take care of the updates, but that's a whole different discussion. The point is that your app could be waiting to update the display of progress.
It's a very old issue, but I happened to run into the question just now. So for whomever it may concern nowadays, the solution (and therefore the problem) was frustratingly silly. LINQ to SQL was configured on the dev machines to constantly write a log to the console.
This was causing a huge delay due to the simple act of outputting a large amount of text to the console. On the web server the log was not being written, and therefore there was no performance drawback. There was colossal face-palming once we figured this one out. Thanks to the helpers; I hope this answer will help someone solve it faster next time.
Unattended logging. That was the problem.
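If anyone needs to reproduce or rule out this problem, the culprit is the DataContext.Log TextWriter; a minimal sketch (YourDataContext is a placeholder for your generated context class):

```csharp
using (var db = new YourDataContext())
{
    // Writing the generated SQL to the console is what killed us on dev machines:
    // db.Log = Console.Out;

    // Leaving Log at its default (null) avoids the overhead entirely:
    db.Log = null;

    // ... queries and SubmitChanges() as usual ...
}
```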
I've developed a .NET-based Windows service that uses both managed code (C#) and unmanaged code (C/C++ libraries).
In some domain environments (e.g. a Win 2k3 32-bit server inside the domain abc.com) the service sometimes takes more than 30 seconds to start (especially on OS restart), thus failing to start. I suspect it has something to do with enterprise-level security, but I don't know for sure.
http://msdn.microsoft.com/en-us/library/aa720255%28VS.71%29.aspx
I've tried the following without success:
- delay loading references by moving the using directives as far as possible from the ServiceBase implementation (especially the Xml namespace, known to cause delays in loading)
- delay loading and configuring log4net
- precompiling the code by using ngen
- delaying the start of the worker thread
- add/remove the manifest + the dependencies set inside it
- sign/unsign the binaries
- use the configuration settings (there are a lot of settings, and the scope level for all of them is set to application) as late as possible
- add all dependencies to GAC
I haven't yet tried adding security demands for the class that implements the Main method.
I haven't tried implementing my own configuration loader because, after inspecting the autogenerated code, I noticed that the settings class is a singleton and gets its instance on call.
By completely removing the log4net dependency it worked, but this is not an option.
When the network card is disabled the service starts immediately.
You'd normally use SysInternals' Process Monitor to diagnose this kind of problem. The fact that this is a service complicates matters. Check this blog post for a similar troubleshooting session.
It quacks like a CRL (Certificate Revocation List) problem, by the way. To disable the check: Control Panel, Internet Options, Advanced tab, Security, untick "Check for publisher's certificate revocation".
We discovered that using a log4net UDP appender with a name resolution (even to 127.0.0.1) was causing a massive slowdown at startup.
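In case it helps anyone, pointing the appender at a literal IP address instead of a host name avoids the resolution at startup; a hedged log4net config sketch (the port and layout are placeholders):

```xml
<appender name="UdpAppender" type="log4net.Appender.UdpAppender">
  <!-- A literal IP avoids the DNS lookup that delayed our service start -->
  <remoteAddress value="127.0.0.1" />
  <remotePort value="8080" />
  <layout type="log4net.Layout.XmlLayoutSchemaLog4j" />
</appender>
```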
We have a 2 x Quad Core Xeon server with 8GB of RAM and Windows Server 2003 Enterprise installed on it. We installed our application server which is based on .NET Framework 3.5 on it. The server uses SQL Server 2005 as its database server.
When we installed the application server, it had ultra-fast performance and everything was fine. Once we joined it to our domain, its performance decreased dramatically. For example, a task that took 1 second to complete now takes about 30 seconds. This is very strange, since only .NET-based applications took this performance hit; the other applications still run at their normal speed.
Does anyone have any idea about why is this happening? Any help or suggestion is much appreciated.
Unfortunately, more is probably needed to answer your question. There are a host of possible reasons why this is occurring, and most of them involve your code.
Based on the symptom that you joined the domain and then things started causing trouble, I'd say you're doing a lot of networking that previously could be done locally on your machine, and the latency is now actually causing trouble.
But that's a wild guess based on not nearly enough information.
I'd suggest you profile your code. Find out where the majority of your time is spent during execution and then post the code or a sanitized version of it here so we can help you optimize it.
I did find the answer to my question, so I thought it might be good to share it here. The CLR tries to generate publisher evidence for assemblies with an Authenticode signature when it loads them. In our case the CLR was trying to connect to crl.microsoft.com, but our server's internet access was blocked, which caused a huge delay whenever the application server tried to load a new assembly.
The following post describes how you can disable this feature:
Bypassing the Authenticode Signature Check on Startup
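The fix described in that post is a one-line change in the service's app.config:

```xml
<configuration>
  <runtime>
    <!-- Stops the CLR from generating publisher evidence
         (and thus from contacting crl.microsoft.com) at load time -->
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>
```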
I'm going to make a guess here and assume you're talking about a web application. If so, you might want to take a look at the application pools you have set up on the web server. Your application might be getting confused about which pool to place itself in when it starts running.
Another thing to check might be your data connections: make sure you're closing everything that's been opened.
The last thing: as Randolpho said, you're just going to have to follow your code execution with some kind of profiler and see where things are getting tied up.
Good luck!