I'm writing a small utility to update our application.
To update the update utility itself, I would like it to rename itself while running and copy the new version from a remote source, so that the next time you start the updater, you get the new version.
Do you know of any problems that could occur with that mechanism?
Actually, I was surprised that renaming a running program is possible at all (lost a cake on that one...), while deleting it is not allowed.
Kind regards, and thanks for any hints.
using Win XP, .NET 3.5
You can rename it because renaming alters metadata only; the actual file allocation chain is unmodified, which means the file can stay memory-mapped in the process(es) that use it.
This is a ubiquitous trick in installers when they have to upgrade 'live' running binaries.
It can cause trouble if the application later tries to reopen itself from the original file specification. This is not something that regularly happens with executables or DLLs, though you should be aware of embedded resources and programs that may do some self-certification (license checks). It's usually best to restart the corresponding application sooner rather than later, much like Windows urges you to reboot after system updates.
Renaming an .exe is usually possible without any problems; renaming .dlls is quite another story.
I'd suggest using subdirectories instead (labeled with the date or version number) and creating a small launcher application (with the same name and icon as your "real" application) that reads the current version from a text file and launches it.
i.e.
updater.exe (the launcher)
updater.config (containing /updater_v_02/updater.exe)
/updater_v_01/updater.exe (the real app, v 01)
/updater_v_02/updater.exe (the real app, v 02)
This way, you can
keep several versions of your application around
test a new version (by directly launching it from the subdir) while your users continue using the old version
switch DLLs etc. without any hassle
Related
I'm currently running some computationally intensive simulations, but they are taking a long time to complete. I've already split the workload across all the available physical cores in my processor. What I'm wondering is how to go about splitting the workload further and assigning it to other computers. I'm contemplating buying a couple of Xeon servers and using them for the number crunching.
The one big issue I have is that I'm currently running the program within Visual Studio (Ctrl+F5), as there are two methods that I'm constantly making small changes to.
Any suggestions on how (or whether) it's possible to assign the workload to other computers, and whether it's possible to still run the program from VS, or would I need to create an *.exe each time I wanted to run it?
It depends on the problem you're solving.
You can use map/reduce and Hadoop if it's easily parallelizable, like SETI@home.
You can use something like MPI if it's not, like linear algebra.
Isn't the crux of your problem in this statement: "The one big issue I have is that I'm currently running the program within Visual Studio (Ctrl+F5) as there are two methods which I'm constantly making small changes to"?
Is it the "one big issue" because, if you distribute the work, you can't afford to modify the code on all of the nodes while the job is running, so you're thinking about something that distributes it for you? If that's the case, then I assume you already know how to split the algorithm or the data so that nodes can take care of small parts of the job.
If so (sorry if I misunderstood), then externalise the part that you are "constantly making small changes to" into a file or a database, encoded in a simple or more elaborate form depending on what you are changing, so that your nodes don't need to change constantly. Deploy the code on all nodes, connect them to the DB or file containing the varying bit, and enjoy your new Ferrari!
You could use the WMI service to start your process on the remote computers. You would build your exe to a shared directory that is visible to the remote computer, then use WMI on the remote computer to launch the exe.
There are plenty of examples out there, but to illustrate, a simple method that assumes no authentication complications is a .VBS script file:
' Launch a process on a remote machine via WMI
strComputer = "acomputer"      ' name of the remote machine
strCommandLine = "calc.exe"    ' command line to run there
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
Set objProcess = objWMIService.Get("Win32_Process")
intReturnValue = objProcess.Create(strCommandLine, , , intPID)   ' intPID receives the new process ID
WScript.Echo "Process ID: " & intPID
You can also use PsExec from SysInternals to handle all the details of making this work.
After building the exe in Visual Studio, you could run it on your local machine to ensure it does what you want, then when you are ready to launch it on the remote systems, you can execute a batch script similar to the above VBS to launch the exe on the remote systems.
You will still need to provide some mechanism to divide up the workload so that each client knows what part of the problem it is supposed to work on. You could provide this information in the command line used to start the remote apps, in a config file in the directory with the exe, in a database table, or via a separate command-and-control type server that the clients connect back to (although with that approach you'll soon reach the stage where you would have been better off learning to use an existing solution rather than rolling your own).
You may also want to include a remote 'kill switch' of some sort. You could use PsKill from SysInternals, or if you want a more graceful shutdown, something simple like the existence of a particular file in the same directory as the exe can serve as a flag for the remote processes to shut themselves down.
You could also consider adding CS-Script support to the client, so that the remote client programs are static and load and compile a CS-Script file to do the work. This might be useful if you encounter some kind of difficulty in frequently redeploying and restarting the client programs, or if you need them all to be slightly different (you might write a program to generate separate script files for each client, for example).
I developed a program (in C# Winforms) and distributed it through a Google site I created.
I got a comment from someone saying that it doesn't work unless DEP is disabled (he has Windows 7).
I read a little about DEP, and I understand that it blocks programs that try to execute code from memory regions that are supposed to hold only data.
Is this something I did when I developed the program? I made a setup project for the program, so it creates an .msi file. Is there a way to prevent my program from running those forbidden pieces of RAM (if I understood that correctly, of course)?
Here is the link to my site, if it helps:
https://sites.google.com/site/chessopeningmaster/
All .NET programs, at least since .NET 2.0 but possibly earlier, declare themselves DEP-compatible. That's done by a flag in the header of the executable. You can see it when you run dumpbin.exe /headers on the EXE from the Visual Studio Command Prompt:
...
2 subsystem (Windows GUI)
8540 DLL characteristics
Dynamic base
NX compatible // <=== here
No structured exception handler
Terminal Server Aware
100000 size of stack reserve
....
"NX" means Never eXecute, a data execution prevention mechanism implemented in hardware by the processor. The Wikipedia article about it is pretty good.
This is enforced by any modern version of Windows (XP SP2 and later) and any modern processor. You can safely assume that your program is in fact DEP compatible if it executes properly on your machine.
So this user probably saw your program crash, for whatever reason, and started tinkering with the tools available to him, like turning DEP enforcement off. Technically, it is possible that this stopped the crash. That, however, doesn't mean that the program is operating correctly, and it most certainly doesn't mean that you should turn this option off (which is technically possible by running editbin.exe with the /nxcompat:no option).
If you want to pursue this then you should ask the user for a minidump of the crashed process.
I have a data file, and from time to time I need to write a change to the file. The change consists of changing information in more than one place; for example, changing some data near the end of the file and also changing some information near the start. I want the two separate writes to either both succeed or both fail, otherwise the file is left in an uncertain state and effectively corrupted. Is there any built-in support for this scenario in .NET, or in general?
If not, how do others solve this issue? How does a database on Windows solve it?
UPDATE: I do not want to use the Transactional NTFS capability, because it is not available on older versions of Windows such as XP, and it is slow in the file-overwrite scenario described above.
Databases basically use a journal concept (at least the ones I'm aware of). The idea is that a write operation is recorded in the journal until the writer commits the transaction. (This is just a basic description, to keep it simple.)
In your case, the journal could be a copy of your file into which you write the data; if everything finishes successfully, you substitute the original file with its copy.
The substitution is: rename the original file to an "old" name, then rename the backup copy to the original name.
If the substitution fails, that is a critical error which the application should handle via fault-tolerance strategies; for example, it could inform the user about the failed save operation and try to recover. In any case, at every moment you have both copies of your file: the one from when the write operation started, and the one from when it finished.
We used this technique with pretty good success on past projects: VS-IDE-like systems for industrial control.
If you are using Windows Vista or later (Vista/7/2008/2008 R2), the NTFS filesystem supports transactions (including within a distributed transaction), but you will need to use P/Invoke to call the Win32 APIs (see this question).
If you need to run on older versions of Windows, or on non-NTFS partitions, you would need to perform the transactions yourself. This is decidedly non-trivial: getting full ACID functionality while handling multiple processes (including remote access via shares) across process and system crashes, even under the assumption that only your access methods will be used (some other process using the normal Win32 APIs would of course break things).
In this case a database will almost certainly be easier: there are a number of in-process databases (SQL Server Compact Edition, SQLite, ...), so a database doesn't require a server process.
I would like to be able to do an "in-place" update with my program. Basically, I want to be able to log in remotely where the software is deployed, install the update while other users are still using the program (in a thin-client way), and have it update their program.
Is this possible without too much of a hassle? I've looked into ClickOnce technology, but I don't think that's really what I'm looking for.
What about the way Firefox does its updates? It just waits for you to restart the program and notifies you when it's been updated.
UPDATE: I'm not remoting into the users' PCs. This program is run on a server; I remote in and update it, and the users run it directly off the server through remote access.
ClickOnce won't work because it requires a web server.
I had some example code that I can't find right now but you can do something similar to Firefox with the System.Deployment.Application namespace.
If you use the ApplicationDeployment class, you should be able to do what you want.
From MSDN, this class...
Supports updates of the current deployment programmatically, and handles on-demand downloading of files.
Consider the MS APIs with BITS (e.g., just using bitsadmin.exe in a script) or Windows Server Update Services.
Some questions:
Are the users running the software locally, but the files are located on a networked share on your server?
Are they remoting into the same server you want to remote into, and execute it there?
If 2: are they executing the files where they are placed on the server, or are they copying them down to a "private folder"?
If you cannot change the location of the files, and everyone is remoting in, and everyone is executing the files in-place, then you have a problem. As long as even 1 user is running the program, the files will be locked. You can only update the files once everyone is out.
If, on the other hand, the users are able to run their own private copy of the files, then I would set up a system where you have a central folder with the latest version of the files, and when a user starts his program, it checks if the central folder has newer versions than the user is about to execute. If it does, copy the new version down first.
Or, if that will take too long and the user will get impatient (what, users getting impatient?), have the program check the versions after startup and remind the user to exit instead. In this case, the program would set a flag so that the next startup does the copying; only now the user is aware of it happening.
The copying part can easily be handled either by having a separate executable that does the actual copying and executing that instead, or by having the program copy itself temporarily to another location and run that copy with parameters that say "update the original files".
While you can design your code to modify itself (maybe not in C#?), this is generally a bad idea: it means you must restart something to get the update. (On Linux you can replace files that are in use; however, the update does not take effect until the new data is loaded into memory, i.e., on application restart.)
The strategy used by Firefox (I never actually looked into it) is to store the updated executable in a different file, which is checked for when the program starts to load. This allows the program to overwrite itself with the update before the resource is locked by the OS. You can also design your program to be more modular, so that portions of it can be "restarted" without requiring a restart of the entire program.
How you actually do this is probably provided by the links given by others.
Edit: In light of a response given to Lasse V. Karlsen:
You can have your main program look for the latest version of the program to load (this launcher program wouldn't be able to get updates without everyone exiting). You can then remove older versions once people are no longer using them. Depending on how frequently people restart their program, you may end up with a number of older program versions around.
ClickOnce and Silverlight (out-of-browser) both support your scenario, if we're talking about upgrades. Remote login to your users' machines? Nope. And no, Firefox doesn't do that either, as far as I can tell.
Please double-check both methods and add them to your question, explaining why they might not do what you need. Otherwise it's hard to move on and suggest better alternatives.
Edit: This "I just updated, please restart" thing you seem to like is one method call for Silverlight applications running outside of the browser. At this point I'm fairly certain that this might be the way to go for you.
ClickOnce doesn't require a web server; it will let you publish updates while users are running the software. You can code your app to check for a new update every few minutes and prompt the user to restart the app if a new version is found, which will then take them through the upgrade process.
Another option is a Silverlight OOB application, but this would be more work if your app is already built as WinForms/WPF client app.
Various deployment/update scenarios (for .NET applications) are discussed with their pros and cons in Microsoft's Smart Client Architecture and Design Guide. Though a little bit old, I find that most of it still holds today, as it describes basic architectural principles rather than technical details. There is a PDF version, but you can find it online as well:
Deploying and Updating Smart Client Applications
Is this possible without too much of a hassle?
Considering the concurrency issues with thin clients and the complexity of Windows installations: yes, hot updates will be a hassle without doing it the way the system demands.
Is it possible to update an application to a new version without closing it?
Or is there a good way to do that without user noticing it was closed?
Typically applications notice on startup that an update is available, then ask the user whether it's okay to update. They then start the update process and exit. The update process replaces the files, then launches the new version.
In some cases you may be able to get away with updating some pieces of an application without a restart - but the added complexity is significant, and frankly it's better not to try in 99% of cases, IMO.
Of course, you haven't said what kind of app you're writing - if you could give more information, that would help.
The application needs to be closed before updating it, because updating an application generally means replacing its executable files (.exe, .dll, etc.) with newer versions, and this can't be done while the application is running.
As Jon said, in some cases you can upgrade the application without closing it. But this is not advisable, as it might cause the updater to fail and the whole update to roll back.
The updater can be another executable that first closes the main application, then downloads the updates, applies them, restarts the main application, and exits (Skype and Firefox are examples of this).
You could separate the backend into a separate process/module and update the backend by restarting it without the user realizing it.
Updating the front end will be a bit trickier, but could be avoided or delayed, if necessary.
A nice and clean way to achieve this would be using dynamic plugins.
You can code your application heavily plugin-based. When an update is needed, unload the plugin that needs to be updated, update the .dll file and load it back into the application.
However, making this invisible to the user may be a tough job, therefore it depends heavily on your design and coding.
I remember InTime having the ability to swap exes live; however, that had to be carefully coded. I know it's possible, but as Jon Skeet said, you're likely better off not trying.
Unless you're doing some kind of automation or something very serious... and even then, you should consider a failover so you can shut one instance down and restart it if needed.
If you have some sort of skeletal framework which launches your application and DLLs, you could look at CreateDomain. It will take serious design effort on your part, though. Good luck!