My code tries to compile a large XSLT 2.0 transform (not complex, just lots of simple mappings: about 24,000 lines, generated by MapForce) in C# (.NET 4.5 on 64-bit Windows 7) with Saxon HE (9.5, latest).
When I run this from a console application, it works fine (albeit slowly): executing the transform takes 200-300 ms and I get the output I'm expecting.
When I run the same code wrapped in a WCF service in IIS (7.5), or as an HTTP handler in IIS, I get a StackOverflowException shortly after the compile call (the next line is never reached).
If I try with a small transform, my code works in IIS.
The event and IIS logs don't show anything that appears useful.
Other than building Saxon from source (apparently a bit hard - any pointers welcome) to see if that helps, does anyone have any ideas where to start with this one?
After much fiddling, it turns out that the IIS worker process has, by default, a much smaller stack than a standalone application, and this was the cause of the problem. You can modify the .exe to change this, but it was simpler for us to create a new thread and specify its stack size on creation. Problem solved instantly. One to remember!
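For reference, here is a minimal sketch of the workaround (the method name and the 4 MB figure are just illustrative; pick a stack size that fits your transform, and put your Saxon compile call inside the delegate):

    using System;
    using System.Threading;

    static void CompileOnBigStackThread(Action compileAction)
    {
        // IIS worker threads reportedly get a much smaller stack than the
        // 1 MB default of a standalone .NET process, so we run the Saxon
        // compilation on a dedicated thread with an explicit stack size.
        Exception error = null;
        var thread = new Thread(() =>
        {
            try { compileAction(); }
            catch (Exception ex) { error = ex; }
        }, 4 * 1024 * 1024); // 4 MB stack; adjust as needed

        thread.Start();
        thread.Join(); // block until the compilation finishes
        if (error != null) throw error;
    }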
I have updated my:
Ubuntu server to 16.04.1 LTS, and
Mono to v4.6.2
...both from the official repositories.
Since the update, the websites still run fine, but after about a day or two, some of the Mono processes go crazy and take 100% of the CPU. I host several websites, mostly plain HTML with just a little bit of code, and it happens at random, on a different website each time.
I then receive an email alert about the high CPU usage, connect via SSH, run htop, and kill the process; it's back to normal... for a day or two.
This definitely looks like a bug in this version of Mono. Is there any way to fix it? Has anyone else had this problem? Or should I perhaps switch to a version that doesn't have this bug?
Thanks
Edit: After 2 days, every Mono process is taking up the full CPU.
Looking into the Apache2 log file, I found this warning related to Mono:
WARNING: WebConfigurationManager's LRUcache evictions count reached its max size
Cache Size: 100 (overridable via MONO_ASPNET_WEBCONFIG_CACHESIZE)
Also, "service apache2 restart" does not solve the problem. I must manually kill the processes, or reboot.
After trying all the options, it seems Mono just doesn't work well with Apache2 and mod_mono. The only solution I found is to switch Apache2 from prefork to worker mode, where the Mono server is started manually and Apache2 simply forwards the requests to it, so Apache2 doesn't touch Mono directly at all. There is very little documentation on how to do this, but since NGINX works in that mode, you can find instructions on how to set it up for NGINX and translate the configuration for Apache2.
These are good places to start
http://www.mono-project.com/docs/web/fastcgi/nginx/
http://epmjunkie.com/mono-fastcgi-startup-script/
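For Apache2, the idea translates to starting the FastCGI Mono server yourself and pointing Apache at it. A rough sketch, assuming the stock fastcgi-mono-server4 from the Mono packages (the path and port are examples only):

    # Start the Mono FastCGI backend manually (or via an init script):
    fastcgi-mono-server4 /applications=/:/var/www/example/ \
        /socket=tcp:127.0.0.1:9000 &

Apache, running in worker mode, is then configured to forward requests to 127.0.0.1:9000 through its FastCGI proxy module, so it never loads Mono in-process the way mod_mono does.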
I have played around with various Mono versions, using "service apache2 reload" to reproduce the high CPU usage problem.
In Mono 4.8, it seems to happen a bit less often, but the problem is still there.
In Mono 4.2.3.4, the problem is also there.
In Mono 4.2.1, which comes by default on Ubuntu, this problem doesn't happen.
As for .NET Core, some have tried it and strongly recommended that I avoid it until it becomes more stable.
So for now, the only solution is to stick to Mono 4.2.1.
This also confirms that the issue is related to Mono and not to my code or the server configuration.
So this is a weird one.
I created a WPF application using MahApps for the GUI. So far my testing indicates that the app works fine on several different machines. Of course this is not the case on the client's machine.
The client uses Terminal Services on Windows Server 2008 R2, so several users can be logged into their own session on the server at any time. The app starts up fine once or twice, but after a day or so, it no longer opens.
The app doesn't show up in the Applications tab of Task Manager, but its process can be seen running in the Processes tab.
To be honest, I'm completely stumped. I had a look at the event log and couldn't find anything indicative of a problem (of course, I might have missed something). I saw another SO question suggesting disabling hardware acceleration, but I'm not sure if that would help.
Any and all ideas would be greatly appreciated.
EDIT:
I thought I might mention that the only thing that helps is restarting the client machine.
EDIT:
I think I have isolated the issue to the integration with TWAIN (I should probably have mentioned that as another possible factor). I think the TWAIN library (unmanaged code) somehow stalls without sending back an error. Disabling it has "fixed" the issue.
This somehow relates to TWAIN and multi-session setups. I'm almost sure of it.
First, you can analyze the wait chain in Windows Resource Monitor to check whether there are any resources the process is waiting for. (You can find more information about the wait chain here or here.)
If you don't find any viable suspects there, you can create a memory dump of the hanging process and analyze its call stacks. If you don't know how to create one, you can read about it here. If you want to use Windows Task Manager and your OS is 64-bit, be aware that you need to use a Task Manager with the same bitness as the application.
That is: if your application is 64-bit, you have to use C:\Windows\System32\taskmgr.exe, and if it's 32-bit, you have to use C:\Windows\SysWOW64\taskmgr.exe. If you forget this important step, you'll just get an unusable dump full of gibberish.
Once you have the memory dump, you can load it into either WinDbg (again matching the bitness of the application) or Visual Studio (best to use 2015 or later) and analyze the call stacks of all running threads.
You can download WinDbg here and read about the necessary WinDbg configuration here. For the list of all threads you need to use this SOS command.
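In case those links go stale: a typical WinDbg session for a .NET 4.x dump looks something like this (standard SOS extension commands; the exact set may vary by version):

    $$ Load the SOS debugging extension matching the dump's CLR
    .loadby sos clr
    $$ List all managed threads
    !threads
    $$ Dump the managed call stack of every thread
    ~*e !clrstack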
If you need help in loading memory dumps into Visual Studio you can find more information here.
After you've looked at the call stacks, you will most likely find what is waiting on which resource and thus preventing the shutdown or startup of the application. It can be a classic deadlock, or an external resource such as a file being read or written, or some other wait without a timeout, like accessing a database or a URL that can't be reached at the moment. And of course it can also be just an infinite loop; if it doesn't consume much CPU, perhaps one with some kind of DoEvents in between.
And last but very much not least: if you are really interested in what can be analyzed when an application hangs, you can read about an example analysis done by the absolutely awesome Mark Russinovich here.
I have a C# .NET 4.6 console application that is supposed to run continuously (over days/months). However, after a non-deterministic duration, all running threads freeze for no apparent reason (CPU usage is at 0%, memory is not particularly high), and trying to attach Visual Studio 2015 to the process for debugging fails (pressing "pause" causes Visual Studio itself to stop responding!).
I inspected the parallel stack traces (captured via a dump in Process Explorer) and could not find any sign of a deadlock (which would otherwise be the obvious culprit).
Here are, for example, two parallel stacks that are frozen, not even in my code but in the DirectoryInfo.cs core library and the ServiceStack OrmLite library, even though there is absolutely no reason for them to be stuck like this.
I have previously noticed this freezing behavior in other parts of the code, so it really seems these libraries are "victims" of the freeze and not responsible for it. Even if there were a deadlock I could not see, it should not prevent these threads from completing, as they are not waiting for anything.
Finally, killing the process and restarting it will always allow the previously frozen operations to run successfully.
Do you have any clue as to what could be causing this kind of weird behavior, or any advice on tools to get more insight?
It seems both threads are hanging while reading data (ExecuteReader reads data from a file, and the enumerator is trying to read directory information from the file system). Is the file system accessible at that point in time? Are you able to access the directory in which the reading is happening?
The application in question is written in C#. We are late in the development cycle, close to launch on our application. One of my coworkers is seeing the following issue:
When he logs out of his Windows 7 session while the application is running, he gets a "csc.exe - Application Error" popup window that says "The application was unable to start correctly (0xc0000142). Click OK to close the application."
I believe that I have tracked this down to the fact that we update the application's XML config file on exit, and the code uses XmlSerializer. According to this question, XmlSerializer launches csc.exe to compile serialization assemblies dynamically on an as-needed basis, at run time. My suspicion is bolstered by the fact that, if I remove the update to the config file at exit time, then my coworker no longer sees the error message in question.
Can someone explain to me in more detail what is happening here? Why does csc.exe fail to start properly when executed at system logout? Is there some low-risk solution that I can put in place to mitigate the problem?
Things I have considered:
Use sgen to generate the serialization assemblies and deploy them with the application. This sounds promising, but my experiments with it were pretty dismal. It seems to be able to generate a DLL only for an entire assembly or for a single class, with no way to specify a list of classes. Also, when I point it at one of my assemblies, it starts complaining about classes in the assembly with duplicate names.
Use another means to read / write the XML. I'm not confident about implementing this at our current stage of development. We are hoping to launch soon, and this feels like too much of a risk.
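A third low-risk idea, sketched under the assumption that the config type is serialized with the plain XmlSerializer(Type) constructor (which caches its generated assembly per type): construct the serializer once at startup, so csc.exe runs while the session is healthy instead of during logout. AppConfig is a stand-in for the actual config class.

    using System.Xml.Serialization;

    public class AppConfig { /* stand-in for the real config type */ }

    public static class ConfigSerializer
    {
        // The first construction triggers the dynamic assembly compilation
        // (the csc.exe run); the XmlSerializer(Type) constructor caches the
        // result, so the exit-time save reuses it without launching csc.exe.
        public static readonly XmlSerializer Instance =
            new XmlSerializer(typeof(AppConfig));
    }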
We ran into strange SQL/LINQ behaviour today:
We used to use a web application to perform some intensive database actions on our system. Recently, we moved to a WinForms interface for various reasons.
We found that performance has seriously decreased: an action that used to take about 15 minutes now takes as long as a whole hour. The strange thing is that it's the exact same method being called. The method performs quite a bit of reading/writing using LINQ to SQL, and profiling on the client machine showed that the problematic section is the SQL action itself, in LINQ's "Save" method.
The only difference between the cases is that in one case the method is called from a web application's code-behind (MVC in this case), and in the other from a Windows Form.
The one idea I could come up with is that SQL performance has something to do with the identity of the user accessing the db, but I could not find any support for that assumption.
Any ideas?
Did you run both tests from the same machine? If not, hardware differences could be the issue... or the network: one machine could be in a higher-speed section of your network, such as the same VLAN as the SQL server. Try running the client code on the same server the web app was running on.
Also, if your app is updating progress synchronously, it could be waiting a long time for the display to update, as opposed to working with a stream à la Response.Write.
If you are actually outputting progress as you go, you should make sure that the progress updates are raised as events and that displaying them happens on another thread, so that the processing isn't waiting on the display. Actually, you should probably put the processing on its own thread and just have an event handler take care of the updates, but that is a whole different discussion. The point is that your app could be waiting on the progress display; see the sketch below.
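A minimal sketch of that pattern, assuming .NET 4.5+ WinForms (DoHeavyWork and progressBar are placeholders, not from the original post):

    using System;
    using System.Threading.Tasks;

    // Inside your Form class; progressBar is a designer control and
    // DoHeavyWork stands in for the real per-step processing.
    public void StartWork()
    {
        // Progress<T> captures the current SynchronizationContext, so
        // the callback below runs back on the UI thread.
        var progress = new Progress<int>(percent =>
        {
            progressBar.Value = percent; // UI update, off the worker's back
        });

        Task.Run(() =>
        {
            for (int i = 0; i <= 100; i++)
            {
                DoHeavyWork(i);                       // placeholder work
                ((IProgress<int>)progress).Report(i); // non-blocking report
            }
        });
    }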
It's a very old issue, but I happened to run into the question just now. So, for whom it may concern nowadays: the solution (and, before it, the problem) was frustratingly silly. LINQ to SQL was configured on the dev machines to constantly write a log to the console.
This was causing a huge delay due to the simple act of outputting a large amount of text to the console. On the web server the log was not being written, and therefore there was no performance drawback. There was colossal face-palming once we figured this one out. Thanks to the helpers; I hope this answer will help someone solve it faster next time.
Unattended logging. That was the problem.
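For anyone hunting the same thing: the pattern to look for is a DataContext.Log assignment left enabled in the dev build. A hedged sketch (MyDataContext and connectionString are illustrative, not from the original post):

    using System;
    using System.Data.Linq;

    var db = new MyDataContext(connectionString);

    #if DEBUG
        db.Log = Console.Out; // verbose SQL logging, fine on a dev box
    #else
        db.Log = null;        // keep it off everywhere else
    #endif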