I developed a program (in C# Winforms) and distributed it through a Google site I created.
I got a comment from someone that it doesn't work unless DEP is disabled (he has Windows 7).
I read a little about DEP and, as far as I understand it, it blocks any program that tries to execute code from memory regions that are supposed to hold only data for the Windows system...
Is this something I did when I developed the program? I made a setup project for the program so it creates an MSI file. Is there a way to prevent my program from executing those forbidden pieces of RAM (if I understand that correctly, of course)?
The link to my site, if it helps:
https://sites.google.com/site/chessopeningmaster/
All .NET programs, at least since .NET 2.0 but possibly before, declare themselves DEP compatible. That's done by a flag in the header of the executable. You can see it when you run dumpbin.exe /headers on the EXE from the Visual Studio Command Prompt:
...
2 subsystem (Windows GUI)
8540 DLL characteristics
Dynamic base
NX compatible // <=== here
No structured exception handler
Terminal Server Aware
100000 size of stack reserve
....
"NX" means No eXecute, a data execution prevention mechanism implemented in hardware by the processor. The Wikipedia article about it is pretty good.
This is enforced by any modern version of Windows (XP SP2 and later) and any modern processor. You can safely assume that your program is in fact DEP compatible if it executes properly on your machine.
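If you want to check the flag yourself without the Visual Studio tools, here is a minimal sketch (my own illustration, based on the documented PE header layout, not part of the original toolchain) that reads the DllCharacteristics field of an EXE and tests the NX-compatible bit:

    using System;
    using System.IO;

    class NxCheck
    {
        const ushort NxCompatFlag = 0x0100; // IMAGE_DLLCHARACTERISTICS_NX_COMPAT

        static void Main(string[] args)
        {
            using (var file = File.OpenRead(args[0]))
            using (var reader = new BinaryReader(file))
            {
                file.Position = 0x3C;                 // e_lfanew: file offset of the "PE\0\0" signature
                uint peHeader = reader.ReadUInt32();
                // signature (4) + COFF file header (20) + offset of DllCharacteristics
                // inside the optional header (70, identical for PE32 and PE32+)
                file.Position = peHeader + 4 + 20 + 70;
                ushort dllCharacteristics = reader.ReadUInt16();
                Console.WriteLine((dllCharacteristics & NxCompatFlag) != 0
                    ? "NX compatible"
                    : "not NX compatible");
            }
        }
    }

Running it against the EXE should print "NX compatible" for any recent .NET build.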
So this user probably saw your program crash, for whatever reason, and started tinkering with the tools available to him, like turning DEP enforcement off. Technically it is possible that this stopped the crash. That however doesn't mean that the program is operating correctly. It most certainly doesn't mean that you should turn this option off, which is technically possible by running editbin.exe with the /nxcompat:no option.
If you want to pursue this then you should ask the user for a minidump of the crashed process.
I have updated my:
Ubuntu server to 16.04.1 LTS and
MONO to v4.6.2
...from the official repository.
Since the update, the websites are still running fine, but after about a day or two, some of the MONO processes go crazy and take 100% of the CPU. I have different websites; mostly plain HTML with just a little bit of code. It happens randomly, and on different websites each time. It's totally random.
I then receive an email alert of high CPU usage, connect via SSH, type "htop", and kill the process and it's back to normal ... for a day or two.
This definitely looks like a bug in this version of MONO. Any way to fix it? Anyone else had this problem? Or perhaps I should switch to a different version that doesn't have this corruption?
Thanks
Edit: After 2 days, EVERY MONO process is taking up the full CPU.
Looking into the Apache2 log file, I found this entry related to MONO:
WARNING: WebConfigurationManager's LRUcache evictions count reached its max size
Cache Size: 100 (overridable via MONO_ASPNET_WEBCONFIG_CACHESIZE)
Also, "service apache2 restart" does not solve the problem. I must manually kill the processes, or reboot.
After trying all options, it seems MONO just doesn't work well with Apache2 and mod_mono. The only solution I found is to switch Apache2 from prefork to worker mode, where the MONO server is started manually and Apache2 simply forwards the requests to it -- so Apache2 doesn't touch MONO directly at all. There is very little documentation on how to do this, but since NGINX works in that mode, you can find instructions on how to set it up for NGINX and adapt the configuration for Apache2.
These are good places to start
http://www.mono-project.com/docs/web/fastcgi/nginx/
http://epmjunkie.com/mono-fastcgi-startup-script/
I have played around with various MONO versions, using "service apache2 reload" to reproduce the high CPU usage problem.
In MONO 4.8, it seems to happen a bit less often, but the problem is still there.
In MONO 4.2.3.4, the problem is also there.
In MONO 4.2.1, which comes by default on Ubuntu, this problem doesn't happen.
As for .NET Core, some have tried it and strongly recommended that I avoid it until it becomes more stable.
So for now, the only solution is to stick with MONO 4.2.1.
This also confirms that this is related to MONO and not to my code or the server configuration.
I am currently using the Windows Error Reporting service that is built into the Windows OS. It falls short in quite a few areas, including automating the submission of new builds for crash collection and the analysis of the actual crashes.
I found a few options including Dr Dump and CrashRpt for C++ applications.
The most appealing option that I found, though, was HockeyApp. My team already uses HockeyApp for hosting mobile applications as well as builds of our desktop applications. Plus, it just seems better supported and more feature-rich than other services. However, it seems to only support crash reports for .NET applications.
The application that I am trying to gather crash reports for is a mixed C++ and C# application though. I'm not sure if there is a crash reporting service out there that can handle both languages.
Without going into immense detail, my application is mostly .NET wrapped up in a native C++ application. I'm assuming that this means I need a service to support C++ crash reporting.
To summarize:
If my application is mostly .NET based, but wrapped up in a native C++ application, do I need a crash reporting service that just supports C++ applications?
Is there a crash reporting service that supports applications with mixed C++ and C# code?
Those tools have a disadvantage: they use the Unhandled Exception Handling mechanism and live in your process. If your application gets corrupted, you can no longer rely on things to work. I had that case in VB6, where the VB6 runtime was destroyed and no unhandled exception handler could help me.
That's also why WER (Windows Error Reporting) exists. While your process is dying, Windows itself still works just fine. And perhaps you can benefit from its features, too: the WER LocalDumps Registry key.
Set this Registry key during the installation of your product and it will save crash dumps into a folder (a minimal sketch of setting the key from code follows the list below). When your application starts up next time, you can do with them whatever you want (maybe ask for the user's permission):
analyze the crash dump (e.g. using ClrMd) and send some exception details to you
shrink the crash dump and send the shrunken one to you. (Convert a full memory .NET dump into a small minidump)
upon request ask the user to send the full crash dump to you
don't forget some cleanup
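To illustrate the setup step mentioned above, here is a minimal sketch of creating the LocalDumps key from code. The exe name and dump folder are just placeholders; the value names and meanings are the documented WER settings, and writing under HKEY_LOCAL_MACHINE requires administrative rights (hence "during the installation"):

    using Microsoft.Win32;

    static class WerLocalDumps
    {
        public static void Enable(string dumpFolder)
        {
            // Per-application LocalDumps key; "MyApp.exe" is a placeholder for your executable name.
            using (var key = Registry.LocalMachine.CreateSubKey(
                @"SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\MyApp.exe"))
            {
                key.SetValue("DumpFolder", dumpFolder, RegistryValueKind.ExpandString);
                key.SetValue("DumpType", 2, RegistryValueKind.DWord);  // 2 = full dump, 1 = minidump
                key.SetValue("DumpCount", 5, RegistryValueKind.DWord); // keep at most 5 dumps
            }
        }
    }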
All in all I would say it's not hard to do. Might be a good project for students. Pro: you don't need to care about all that crash handling, pointer mangling etc., because WER does it for you. You can't really mess it up.
Oh, and when you build that, consider building it as a reusable component and make it open source on Github. I would need that, too. You can't get a patent any more, because with this post I will claim "prior art" :-)
So this is a weird one.
I created a WPF application using MahApps for the GUI. So far my testing indicates that the app works fine on several different machines. Of course this is not the case on the client's machine.
The client makes use of Terminal Services on Windows Server 2008 R2. Several users can be logged into their own session on the server at any time. The app starts up fine once or twice, but after a day or so, it no longer opens.
The app doesn't show up in the Application tab of Task Manager, but its process can be seen to be running in Processes Tab of Task Manager.
To be honest, I'm completely stumped. I had a look at the Event Viewer log and couldn't find anything indicative of a problem. (Of course I might have missed something.) I saw another SO question suggesting disabling hardware acceleration, but I'm not sure if that would help.
Any and all ideas would be greatly appreciated.
EDIT:
I thought I might mention that the only thing that helps is restarting the client machine.
EDIT:
I think I have isolated the issue to integration with Twain (should probably have mentioned that as another possible factor). I think the Twain library (unmanaged code) somehow stalls without sending back an error. Disabling it has "fixed" the issue.
This somehow relates to Twain and multi-session setups. I'm almost sure of it.
First you can analyze the wait chain in Windows Resource Monitor to check if there are any resources the process is waiting for. (You can find more information about the wait chain here or here.)
If you don't find any viable suspects there, you can create a memory dump of the hanging process and analyze the call stacks. If you don't know how to create one, you can read about it here. If you want to use Windows Task Manager and your OS is 64-bit then please be aware that you need to use the same bitness of Task Manager as the application.
That is: If your application is 64-bit then you have to use C:\Windows\System32\taskmgr.exe and if it's 32-bit you have to use C:\Windows\SysWOW64\taskmgr.exe. If you forget this important step you'll just get an unusable dump full of gibberish.
After you got the memory dump you can either load it into WinDbg (using the same bitness as the application) or Visual Studio (best to use 2015 or later) and analyze the call stacks of all running threads.
You can download WinDbg here and read about the necessary WinDbg configuration here. For the call stacks of all managed threads, load the SOS extension (.loadby sos clr) and run !clrstack against every thread, e.g. ~*e !clrstack.
If you need help in loading memory dumps into Visual Studio you can find more information here.
After you've looked at the call stacks you will very likely find out what is waiting on which resource and thus preventing the shutdown or startup of the application. It can either be a classic deadlock, or an external resource such as reading/writing a file, or some other wait without a timeout, like accessing a database or a URL that can't be reached at the moment. And of course it can also be just an infinite loop - if it doesn't consume much CPU, then perhaps with some kind of DoEvents in between.
And last but not least: if you are really interested in what can be analyzed when an application hangs, you can read about an example analysis done by the great Mark Russinovich here.
I would like to be able to do an "in-place" update with my program. Basically, I want to be able to log in remotely where the software is deployed, install it while other users are still using it (in a thin client way), and have it update their program.
Is this possible without too much of a hassle? I've looked into clickonce technology, but I don't think that's really what I'm looking for.
What about the way Firefox does its updates? It just waits for you to restart the program and notifies you when it has been updated.
UPDATE: I'm not remoting into the users' PCs. This program is run on a server; I remote in and update it, and the users run it directly off the server through remote access.
ClickOnce won't work because it requires a webserver.
I had some example code that I can't find right now but you can do something similar to Firefox with the System.Deployment.Application namespace.
If you use the ApplicationDeployment class, you should be able to do what you want.
From MSDN, this class...
Supports updates of the current deployment programmatically, and handles on-demand downloading of files.
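As a rough sketch of what that could look like (this only works if the application was actually deployed via ClickOnce; the dialog text is just an example):

    using System.Deployment.Application;
    using System.Windows.Forms;

    static class Updater
    {
        // Checks for a newer ClickOnce version and applies it; the new version runs after a restart.
        public static void UpdateIfAvailable()
        {
            if (!ApplicationDeployment.IsNetworkDeployed)
                return; // not installed through ClickOnce, nothing to do

            ApplicationDeployment deployment = ApplicationDeployment.CurrentDeployment;
            UpdateCheckInfo info = deployment.CheckForDetailedUpdate();

            if (info.UpdateAvailable)
            {
                deployment.Update(); // downloads the new version
                if (MessageBox.Show("An update was installed. Restart now?", "Update",
                        MessageBoxButtons.YesNo) == DialogResult.Yes)
                {
                    Application.Restart();
                }
            }
        }
    }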
Consider the Microsoft APIs for BITS (Background Intelligent Transfer Service), e.g. just using bitsadmin.exe in a script, or the Windows Update Services.
Some questions:
Are the users running the software locally, but the files are located on a networked share on your server?
Are they remoting into the same server you want to remote into, and execute it there?
If 2. are they executing the files where they are placed on the server, or are they copying them down to a "private folder"?
If you cannot change the location of the files, and everyone is remoting in, and everyone is executing the files in-place, then you have a problem. As long as even 1 user is running the program, the files will be locked. You can only update the files once everyone is out.
If, on the other hand, the users are able to run their own private copy of the files, then I would set up a system where you have a central folder with the latest version of the files, and when a user starts his program, it checks if the central folder has newer versions than the user is about to execute. If it does, copy the new version down first.
Or, if that will take too long, and the user will get impatient (what, huh, users getting impatient?), then having the program check the versions after startup, and remind the user to exit would work instead. In this case, the program would set a flag that upon next startup would do the copying, only now the user is aware of it happening.
The copying part could easily be handled either by having a separate executable that does the actual copying and executing that instead, or by having the program copy itself temporarily to another location and run that copy with parameters that say "update the original files".
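To make that concrete, here is a minimal sketch of the launcher idea. The paths, file names and the timestamp-based "newer version" check are placeholders for whatever scheme fits your deployment:

    using System;
    using System.Diagnostics;
    using System.IO;

    static class Launcher
    {
        static void Main()
        {
            // Hypothetical locations: a shared "latest version" folder and a per-user copy.
            string central = @"\\server\apps\MyApp\latest";
            string local = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "MyApp");

            Directory.CreateDirectory(local);

            foreach (string source in Directory.GetFiles(central))
            {
                string target = Path.Combine(local, Path.GetFileName(source));
                // Copy only if the central file is newer than the local one.
                if (!File.Exists(target) ||
                    File.GetLastWriteTimeUtc(source) > File.GetLastWriteTimeUtc(target))
                {
                    File.Copy(source, target, overwrite: true);
                }
            }

            // Start the private copy; the files on the share stay unlocked.
            Process.Start(Path.Combine(local, "MyApp.exe"));
        }
    }

Since each user runs a private copy under their own profile, the files on the share are never locked and can be replaced at any time.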
While you can design your code to modify itself (maybe not in C#?), this is generally a bad idea. It means that you must restart something to get the update. (On Linux you are able to replace files that are in use; however, the update does not take effect until the new data is loaded into memory, i.e. on application restart.)
The strategy used by Firefox (I never actually looked into it) is to store the updated executable in a different file, which is checked for when the program starts to load. This allows the program to overwrite itself with the update before the file is locked by the OS. You can also design your program in a more modular way so that portions of it can be "restarted" without requiring a restart of the entire program.
How you actually do this is probably provided by the links given by others.
Edit: In light of a response given to Lasse V. Karlsen
You can have your main program look for the latest version of the program to load (this program wouldn't be able to get updates without everyone out). You can then remove older versions once people are no longer using them. Depending on how frequently people restart their program, you may end up with a number of older program versions.
ClickOnce and Silverlight (out of browser) both support your scenario, if we are talking about upgrades. Remote login to your users' machines? Nope. And no, Firefox doesn't do that either as far as I can tell.
Please double-check both methods and add them to your question, explaining why they might not do what you need. Otherwise it's hard to move on and suggest better alternatives.
Edit: This "I just updated, please restart" thing you seem to like is one method call for Silverlight applications running outside of the browser. At this point I'm fairly certain that this might be the way to go for you.
ClickOnce doesn't require a web server; it will let you publish updates while users are running the software. You can code your app to check for a new update every few minutes and prompt the user to restart the app if a new version is found, which will then take them through the upgrade process.
Another option is a Silverlight OOB application, but this would be more work if your app is already built as WinForms/WPF client app.
Various deployment/update scenarios (for .NET applications) are discussed with their pros and cons in Microsoft's Smart Client Architecture and Design Guide. Though a little bit old, I find that most of it still holds today, as it describes basic architectural principles rather than technical details. There is a PDF version, but you can find it online as well:
Deploying and Updating Smart Client Applications
Is this possible without too much of a hassle?
Considering the concurrency issues with thin clients and the complexity of Windows installations, yes, hot updates will be a hassle unless you do it the way the system demands.
I'm looking for a few talking points I could use to convince coworkers that it's NOT OK to run a 24/7 production application by simply opening Visual Studio and running the app in debug mode.
What's different about running a compiled console application vs. running that same app in debug mode?
Are there ever times when you would use the debugger in a live setting? (live: meaning connected to customer facing databases)
Am I wrong in assuming that it's always a bad idea to run a live configuration via the debugger?
You will suffer from reduced performance when running under the debugger (not to mention the complexity concerns mentioned by Bruce), and there is nothing to keep you from getting the same functionality when compiled in release mode: you can always set your program up to log unhandled exceptions and generate a core dump that will allow you to debug issues even after restarting your app.
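A minimal sketch of that wiring (the log file location is arbitrary, and a real implementation would probably also write a minidump at this point):

    using System;
    using System.IO;

    static class Program
    {
        static void Main()
        {
            // Log anything that would otherwise tear the process down, then let it die.
            AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
            {
                File.AppendAllText("crash.log",
                    DateTime.UtcNow + " " + e.ExceptionObject + Environment.NewLine);
                // A dump could be written here as well, e.g. via WER LocalDumps.
            };

            RunService(); // placeholder for the actual 24/7 work
        }

        static void RunService()
        {
            // ... application logic ...
        }
    }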
In addition, it sounds just plain wrong to be manually managing an app that needs 24/7 availability. You should be using scheduled tasks or some sort of automated process restarting mechanism.
Stepping back a bit, this question may provide some guidance on influencing your team.
In itself there's no issue in running it under the debugger if the performance is good enough. What strikes me as odd is that you are running business-critical 24/7 applications as users, perhaps even on a workstation. If you want to ensure robustness and availability, you should consider running this on dedicated hardware that nobody uses besides the application. If you are indeed running this on a user's machine, accidents can easily happen, such as closing the "wrong" Visual Studio, or crashing the computer, etc.
Running in debug should be done in the test environment. Where I've worked we usually have three environments: Production, Release and Test.
Production
Dedicated hardware
Limited access, usually only the main developers/technology
Version control, a certain tagged version from SVN/CVS
Runs the latest stable version that has been promoted to production status
Release
Dedicate hardware
Full access to all developers
Version control, a certain tagged version from SVN/CVS
Runs the next version of the product, not yet promoted to production status, but will probably be. "Gold" if you like.
Test
Virtual machine or lousy hardware
Full access
No version control, could be the next, next version, or just a custom build that someone wanted to test out on "near prod environment"
This way we can easily test new versions in Release, even debug them there. In the Test environment it's anything-goes. It's more for when someone wants to test something involving more than one box (beyond their own).
This way it protects you against quick hacks that weren't tested enough, by having dedicated test machines, but still allows you to release those hacks in an emergency.
Speaking very generically, when you run a program under a debugger you're actually running two processes - the target and the debugger - and tying them together pretty intimately. So the opportunities for unexpected influences and errors (that aren't in a production run) exist. Of course, the folks who write the debuggers do their best to minimize these effects, but running that scenario 24/7 is likely to expose any issues that do exist.
If you're trying to track down a particular failure, sometimes running under a debugger is the best solution; but even there, often enabling tracing of one sort or another is a lower-impact solution that is just as effective.
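For example, a trace listener can be switched on in a release build without attaching anything to the process (the file name and the messages are only illustrative):

    using System.Diagnostics;

    static class Tracing
    {
        public static void Enable()
        {
            // Write trace output to a file; far less intrusive than running under a debugger.
            Trace.Listeners.Add(new TextWriterTraceListener("service-trace.log"));
            Trace.AutoFlush = true;
        }

        public static void Example()
        {
            Trace.TraceInformation("Job started");
            Trace.TraceError("Something went wrong");
        }
    }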
The debugger is also using up resources - depending on the machine and the app, that could be an issue. If you need more specific examples of things that could go wrong using a debugger 24/7 let me know.
Ask them if they'd like to be publicly mocked on The Daily WTF. (Because with enough details in the write up, this would qualify.)
I can't speak for everyone's experience, but for me Visual Studio crashes a lot. It not only crashes itself, it also crashes Explorer. This is exacerbated by add-ons and plugins. I'm not sure if it's ever been tested to run 24/7 for days and days the same way the OS has.
You're essentially putting the running of your app at the mercy of this huge behemoth of a second app that sounds like it's easily orders of magnitude larger and more complex than your app. You're just going to get bug reports, and most of them are going to involve Visual Studio crashing.
Also, are you paying for Visual Studio licenses for the production machines?
You definitely don't want an application that needs to be up 24/7 to be run manually from the debugger, regardless of the performance issues. If you have to convince your co-workers of that, find a new job.
I have sometimes used the debugger live (i.e. against live customer data) to debug data-related application problems in situations where I couldn't exactly reproduce the production data in a test environment.
Simple answer: you will almost certainly reduce performance (most likely considerably) and you will vastly increase your dependencies. In one step you've added the entire VS stack including the IDE and every other little bit to your dependencies. Smart people keep the dependencies of high-uptime services as tight as possible.
If you want to run under a debugger then you should use a lighter-weight debugger like ntsd; this is just madness.
We never run it via the debugger. There are compiler options which may accidentally be turned on/off. Optimizations aren't turned on, and running it in production is a huge security risk.
Aside from the debug build possibly having different code paths (#if DEBUG, Debug.Assert(), etc.), code-wise it will run the same.
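For instance, a trivial illustration of how such debug-only paths diverge (hypothetical code, not taken from the application under discussion):

    using System;
    using System.Diagnostics;

    static class BuildDifferences
    {
        public static void Process(int count)
        {
    #if DEBUG
            // Compiled only when the DEBUG symbol is defined; release builds skip it entirely.
            Console.WriteLine("Processing " + count + " items");
    #endif
            // Debug.Assert is marked [Conditional("DEBUG")], so it is stripped from release builds;
            // a failing assertion therefore only shows its dialog when running a debug build.
            Debug.Assert(count >= 0, "count must not be negative");
        }
    }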
A little scary, mind you - set breakpoints, set the next line of code you want to execute, interactive exception popups, and the not-as-stable experience of running under Visual Studio. There are also debugger options that allow you to break whenever an exception occurs. Even inspecting classes can cause side effects if you haven't written your code properly... It sure isn't something I'd want to do as the normal 24x7 process.
The only reason to run from the debugger is to debug the application. If you're doing that on a regular basis in production, it's a big red flag that your code and your process need help.
To date I've never had to run debug mode interactively in production. On rare occasions we switched over to a debug version for extra logging, but we never sat there with Visual Studio open.
I would ask them what is the advantage of running it via Visual Studio?
There are plenty of disadvantages that have been listed in the replies. I can't think of any advantages.