I have a portable executable that saves data to a file in the same folder as the executable. Is there any way that I can save data into the executable itself when I close the app?
This may be weird, but being able to take the data with me and having only one file for both the exe and the data would be great.
I would prefer to do this in C#, but it is not a requirement.
You cannot modify your own EXE to store data in anything approaching an elegant or compact way. First, the OS locks the EXE file while the application it contains is running. Second, an EXE comes pre-compiled (into MSIL, at least), and modifying the file's contents generally requires recompilation to fix up the various pointers and offsets, or else seriously esoteric knowledge of exactly what you're doing to the file.
The generally accepted methods are the application config file, a resource file, or a custom file you create/read/modify at runtime, like you're doing now. Two files for an application should not be cause for concern.
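For instance, the config-file route can be as simple as this sketch using ConfigurationManager (it needs a reference to System.Configuration, and the "lastState" key is just an example, not anything from the question):
var config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
if (config.AppSettings.Settings["lastState"] != null)
    config.AppSettings.Settings.Remove("lastState"); // avoid duplicate keys
config.AppSettings.Settings.Add("lastState", "some data to keep");
config.Save(ConfigurationSaveMode.Modified);
Note this writes to the .exe.config next to the executable, so the user needs write access to that folder.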
You can, by reserving space with a padded-out string resource. You'll need to do a bit of detective work to find the exact offset within the executable where you want to dump the data, and be careful: the file will be "in use", so proceed cautiously.
So right now you're using an app.config (and Settings.settings) file?
I believe this is the most compact way to save data close to the .exe.
I would highly doubt you can alter the manifest of the .exe, or any other part of it.
Edit: Apparently, there might be some ways after all: http://www.codeproject.com/KB/msil/reflexil.aspx
There is one way, using multiple streams, but it only works on NTFS filesystems.
NTFS allows you to define alternate "named" streams within a single file. The usual content lives in the main, unnamed stream. This is the same mechanism behind the extra information you can see when you right-click a file and check its properties.
Unfortunately, the .NET base class library has no built-in support for alternate streams, but there are open-source projects that can help you.
See this link for a nice wrapper for reading and writing multiple streams in a single file from C#.
Alternate data streams might work. Using the filename:streamname:$DATA syntax you can create a data stream within your exe and read/write data.
Edit:
To create/access an alternate data stream, you use a slightly different filename. Something like:
application.exe:settings:$DATA
This accesses a data stream named "settings" within application.exe. You just add :settings:$DATA to the filename when reading or writing the file. This functionality is provided by NTFS, so it should work from C#, and it should also work while the application is running.
Additional information is available at: http://msdn.microsoft.com/en-us/library/aa364404(VS.85).aspx
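As a rough illustration, assuming a modern runtime: .NET Core 2.1+ and .NET 5+ accept alternate-stream paths in the regular file APIs, while the classic .NET Framework rejects the colon in the path and needs a P/Invoke wrapper like the ones linked elsewhere in this thread. The stream name "settings" is just an example.
// Write to and read back a named stream attached to the exe; the main
// file content (and the loader's lock on it) is untouched.
string adsPath = "application.exe:settings";
File.WriteAllText(adsPath, "saved state goes here");
string state = File.ReadAllText(adsPath);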
If you want to take the data with you and only have one file for the exe and data, .zip them into a self-extracting .exe.
You can append data to the end of the executable file:
var executableName = Process.GetCurrentProcess().MainModule.FileName;
// rename the running executable file (Windows allows renaming a locked exe)
var newExecutableName = executableName.Replace(".exe", "_.exe");
FileInfo fi = new FileInfo(executableName);
fi.MoveTo(newExecutableName);
// copy the renamed executable back to the original name
File.Copy(newExecutableName, executableName);
// append the data to the end of the new copy
var bytes = Encoding.ASCII.GetBytes("new data...");
using (FileStream file = File.OpenWrite(executableName))
{
file.Seek(0, SeekOrigin.End);
file.Write(bytes, 0, bytes.Length);
}
// the old renamed file can be deleted on the next run, once this process has exited
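To read the data back on a later run, one approach (my own assumption layered on the snippet above, not part of it) is to also append the payload length as a fixed four-byte footer when writing, then seek backwards from the end:
var exePath = Process.GetCurrentProcess().MainModule.FileName;
using (var file = File.OpenRead(exePath))
using (var reader = new BinaryReader(file))
{
    file.Seek(-4, SeekOrigin.End);          // hypothetical footer: payload length
    int length = reader.ReadInt32();
    file.Seek(-4 - length, SeekOrigin.End); // start of the appended payload
    string data = Encoding.ASCII.GetString(reader.ReadBytes(length));
    Console.WriteLine(data);
}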
I would like to take a serialized file and save it to the resources folder in my project.
My reason for doing this (maybe there's a better way) is that I have a published exe (a single executable file), and when the program creates a serialized file I don't want it saved to the desktop. I need to somehow save it into my exe without going outside of it.
Any advice on how I could do this?
It's very ugly, but you could use an "alternate data stream" on an NTFS system.
http://ntfs.com/ntfs-multiple.htm
https://learn.microsoft.com/en-us/sysinternals/downloads/streams
How to read and modify NTFS Alternate Data Streams using .NET
https://blogs.msmvps.com/bsonnino/2016/11/24/alternate-data-streams-in-c/
https://oddvar.moe/2018/04/11/putting-data-in-alternate-data-streams-and-how-to-execute-it-part-2/
https://blog.foldersecurityviewer.com/ntfs-alternate-data-streams-the-good-and-the-bad/
https://www.irongeek.com/i.php?page=security/altds
You'll probably have security scanners stopping you from doing it.
In addition, if you copy the file from an NTFS volume to, say, FAT, the alternate data streams are lost.
Also, some backup software may not back up ADS properly.
https://wiki.sep.de/wiki/index.php/Support_for_NTFS_alternate_data_streams_(ADS)_for_Windows
https://www.2brightsparks.com/resources/articles/ntfs-alternate-data-stream-ads.html
https://community.osr.com/discussion/89308/alternate-data-streams-and-backups
https://social.technet.microsoft.com/Forums/Azure/en-US/007d5442-1cd8-4293-b717-b8fa72606189/ntfs-data-streams-broken-by-design-on-file-copy?forum=winserverfiles
I get the file version this way:
var fileVersion = FileVersionInfo.GetVersionInfo(path).FileVersion;
But this option is not suitable for me, since I have to use a non-native tool to get the file, and it returns a stream. Can I get the file version from this stream or from an array of bytes?
Unfortunately, you can't do this directly.
You should:
1. Write the file to disk in some temporary location
2. Read the version from the file on disk
3. Delete the file
In short, no, what you want is not possible with the current tools. The problem is that, as you've noticed, FileVersionInfo.GetVersionInfo relies on a physical file being present on disk. If you look at its internals, you'll see that all it really does is delegate to the Windows API, which does the real work, precisely in the GetFileVersionInfo function; that function in turn takes a file name as a parameter, so it's only designed to operate on the filesystem.
A possible workaround would be to drop a temp file with the binary you got from your stream, get the version info you need, then delete the file.
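A minimal sketch of that workaround (GetExeBytes is hypothetical, standing in for whatever your non-native tool hands you; it needs System, System.Diagnostics, and System.IO):
byte[] bytes = GetExeBytes();
string tempPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
try
{
    File.WriteAllBytes(tempPath, bytes);
    string fileVersion = FileVersionInfo.GetVersionInfo(tempPath).FileVersion;
    Console.WriteLine(fileVersion);
}
finally
{
    File.Delete(tempPath); // clean up the temporary copy
}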
Another option would be to look for a library that can parse in-memory exe/dll files and extract the relevant details directly from there.
As part of our installer build, we have to zip thousands of large data files into about ten or twenty 'packages' with a few hundred (or even thousands of) files in each which are all dependent on being kept with the other files in the package. (They are versioned together if you will.)
Then during the actual install, the user selects which packages they want included on their system. This also lets them download updates to the packages from our site as one large, versioned file rather than asking them to download thousands of individual ones which could also lead to them being out of sync with others in the same package.
Since these are data files, some of them change regularly during the design and coding stages, meaning we then have to re-compress all files in that particular zip package, even if only one file has changed. This makes the packaging step of our installer build take well over an hour each time, with most of that going to re-compressing things that we haven't touched.
We've looked into leaving the zip packages alone and replacing specific files inside them, but inserting and removing large files in the middle of a zip doesn't give us much of a performance boost. (A little, but not enough that it's worth it.)
I'm wondering if it's possible to pre-process files down into a cached raw 'compressed state' that matches how they would be written to the zip package, but only the data itself, not the zip header info, etc.
My thinking is that if this is possible, then during our build step we would first look for any data file that doesn't have a compressed cache associated with it, compress that file, and write the result to the cache.
Next we would simply append all of the caches together in a file stream, adding any appropriate zip header needed for the files.
This would mean we are still recreating the entire zip during each build, but we are only recompressing data that has changed. The rest would just be written as-is which is very fast since it is a straight write-to-disk. And if a data file changes, its cache is destroyed, so next build-pass it would be recreated.
However, I'm not sure such a thing is possible. Is it, and if so, is there any documentation to show how one would go about attempting this?
Yes, that's possible. The most straightforward approach would be to zip each file individually into its own associated zip archive with one entry. When any file is modified, you replace its associated zip file to keep all of those up to date. Then you can write a simple program to take a set of those single entry zip files and merge them into a single zip file. You will need to refer to the documentation in the PKZip appnote. Take a look at that.
Now that you've read the appnote, what you need to do is use the local header, data, and central header from each individual zip file, write the local header and data as is sequentially to the new zip file, and save the central header and the offsets of the local headers in the new file. Then at the end of the new file save the current offset, write a new central directory using the central headers you saved, updating the offsets appropriately, and ending with a new end of central directory record with the offset of the start of the central directory.
Update:
I decided this was a useful enough thing to write. You can get it here.
You could zip each file beforehand, and then "zip" them together with no compression at the end to quickly aggregate them into a distributable package. It won't be as efficient as compressing all the data at once, but it should be faster to make modifications.
I cannot seem to locate an actual exe that implements this type of functionality. Most existing tools I've tried that can merge/update will reprocess (recompress) the data stream, as you've already observed.
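A sketch of that idea with System.IO.Compression (it needs references to System.IO.Compression and System.IO.Compression.FileSystem; the directory names and cache layout are illustrative, and a real build would want sturdier staleness checks than timestamps):
string dataDir = @"C:\build\data";
string cacheDir = @"C:\build\cache";
Directory.CreateDirectory(cacheDir);
foreach (string dataFile in Directory.GetFiles(dataDir))
{
    string cachePath = Path.Combine(cacheDir, Path.GetFileName(dataFile) + ".zip");
    // Rebuild the per-file cache only when the data file is newer than its cache.
    if (!File.Exists(cachePath) ||
        File.GetLastWriteTimeUtc(cachePath) < File.GetLastWriteTimeUtc(dataFile))
    {
        File.Delete(cachePath);
        using (ZipArchive cache = ZipFile.Open(cachePath, ZipArchiveMode.Create))
        {
            cache.CreateEntryFromFile(dataFile, Path.GetFileName(dataFile),
                CompressionLevel.Optimal);
        }
    }
}
// Aggregate the cached zips into one package, stored without recompression,
// so unchanged files cost only a straight disk copy.
string packagePath = @"C:\build\package.zip";
File.Delete(packagePath);
using (ZipArchive package = ZipFile.Open(packagePath, ZipArchiveMode.Create))
{
    foreach (string cachePath in Directory.GetFiles(cacheDir, "*.zip"))
    {
        package.CreateEntryFromFile(cachePath, Path.GetFileName(cachePath),
            CompressionLevel.NoCompression);
    }
}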
However it seems what you describe can be done if you or someone wants to write it. If you take a look at this link for the ZIP file format specification, you can get an overview of the structure you would have to parse out and process. It looks like you can pretty quickly go from file to file gathering up and discarding the files of interest, then merging in your new/updated files. You would still need to rebuild a new central directory (refer to section 4.3.6 of the above linked document) within your new destination archive.
After a little more digging, the DotNetZip Library forum has a message asking about the same type of functionality, which also gives a description just like the one above. It also links to this document, which seems to indicate that support for this may be added to the DotNetZip library for you to experiment with further.
I am building an interface whose primary function would be to act as a file renaming tool (the underlying task here is to manually classify each file within a folder according to rules that describe their content). So far, I have implemented a customized file explorer and a preview window for the files.
I now have to find a way to inform the user if a file has already been renamed (this will show up in the file explorer's listView). The program should be able to read as well as modify that state as the files are renamed. I simply do not know the optimal way to save this kind of information, as I am not yet fully familiar with what C# offers. My initial solution involved text files, but again, I do not know if there should be only one text file for all files and folders or simply a text file per folder indicating the state of its contained items.
A colleague suggested that I use an Excel spreadsheet and then simply import the rows or columns corresponding to my query. I tried to find more direct data structures, but I would feel a lot more comfortable with some outside opinion.
So, what do you think would be the best way to store this kind of data?
PS: There are many thousands of files, all of them TIFF images, located on a remote server to which I have complete access.
I'm not sure what you're asking for, but if you simply want to keep some file information such as name, date, size, etc., you could use the FileInfo class. It is marked as serializable, so you could easily write an array of them to an XML file by invoking the Serialize method of an XmlSerializer.
I am not sure I understand your question, but what I gather is that you basically want to store the metadata for each file. If that's the case, I can make two suggestions.
Store the metadata in a simple XML file, one XML file per folder if you have multiple folders; the XML file could be a hidden file. Your custom application can then load the file, if it exists, when you navigate to the folder, and present the data to the user.
If you are using NTFS and you know this will always be the case, you can store the metadata for each file in a file stream. This is not a .NET stream, but an extra stream of data that is stored and moved around with the file without affecting the file's actual content. The nice thing about this is that no matter where you move the file, the metadata moves with it, as long as it stays on NTFS.
Here is more info on the file streams
http://msdn.microsoft.com/en-us/library/aa364404(VS.85).aspx
You could create an object-oriented structure and then serialize the root object to a binary file or to an XML file. You could represent just about any structure this way, so you wouldn't have to struggle with the
"only one text file for all files and folders or simply a text file per folder indicating the state of its contained items"
design issue. You would just have one file containing all of the metadata you need to store. If you want speedier opening/saving and smaller size, go with binary; if you want something that other people could open, view, and potentially write their own software against, use XML.
There are lots of variations on how to do this, but to get you started, here is one article from a quick Google:
http://www.codeproject.com/KB/cs/objserial.aspx
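For example, a minimal sketch of such a root object (the class and property names are made up for illustration; it needs System.Collections.Generic, System.IO, and System.Xml.Serialization):
public class FileState
{
    public string Path { get; set; }
    public bool Renamed { get; set; }
}
public class RenameCatalog
{
    public List<FileState> Files { get; set; } = new List<FileState>();
    // XML chosen here for interoperability; swap in a binary formatter for size/speed.
    public void Save(string path)
    {
        using (var stream = File.Create(path))
            new XmlSerializer(typeof(RenameCatalog)).Serialize(stream, this);
    }
    public static RenameCatalog Load(string path)
    {
        using (var stream = File.OpenRead(path))
            return (RenameCatalog)new XmlSerializer(typeof(RenameCatalog)).Deserialize(stream);
    }
}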
I am storing attachments in my applications.
These get stored in SQL as varbinary types.
I then read them into a byte[] object.
I now need to open these files, but I don't want to first write them to disk and then open them using Process.Start().
I would like to open them using in-memory streams. Is there a way to do this in .NET? Please note these files can be of any type.
You can write all bytes to file without using Streams:
System.IO.File.WriteAllBytes(path, bytes);
And then just use
Process.Start(path);
Trying to open the file from memory isn't worth the effort. Really, you don't want to do it.
MemoryStream has a constructor that takes a Byte array.
So:
var bytes = GetBytesFromDatabase(); // assuming you can do that yourself
var stream = new MemoryStream(bytes);
// use the stream just like a FileStream
That should pretty much do the trick.
Edit: Aw, crap, I totally missed the Process.Start part. I'm rewriting...
Edit 2:
You cannot do what you want to do. You must execute a process from a file. You'll have to write to disk; alternatively, the answer to this question has a very complex suggestion that might work, but would probably not be worth the effort.
MemoryMappedFile?
http://msdn.microsoft.com/en-us/library/system.io.memorymappedfiles.memorymappedfile.aspx
My only issue with this was that I will have to make sure the user has write access to the path where I will place the file...
You should be able to guarantee that the return of Path.GetTempFileName is something to which your user has access.
...and also am not sure how I will detect that the user has closed the file so I can delete the file from disk.
If you start the process with Process.Start(...), shouldn't you be able to monitor for when the process terminates?
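Putting those two pieces together, a hedged sketch (note that Process.Start can return null when the document is handed off to an already-running application, in which case WaitForExit can't help, as pointed out further down):
// "bytes" is the attachment read from SQL; the .pdf extension is illustrative,
// chosen so the shell can pick a handler for the temp file.
string path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName() + ".pdf");
File.WriteAllBytes(path, bytes);
using (var process = Process.Start(path))
{
    if (process != null)        // null when an existing instance takes the file over
        process.WaitForExit();
}
File.Delete(path);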
If you absolutely don't want to write to disk yourself, you can implement a local HTTP server and serve the attachments over HTTP (like http://localhost:3456/myrecord123/attachment1234.pdf).
Also, I'm not sure you get enough benefit from doing such non-trivial work. You'll open files from the local security zone, which is only slightly better than opening from disk... and there's no need to write to disk yourself. And you'll likely get a somewhat reasonable warning if you have an .exe file as an attachment.
On tracking when "the process is done with the attachment," you're more or less out of luck: only in some cases is the process that started opening the file the one actually using it. For example, Office applications are usually single-instance applications, so the document will open in the first instance of the application, not the one you started.
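A rough single-request sketch of that with HttpListener (.NET Framework flavor; the port, URL, and GetAttachmentBytes helper are all made up, and a real server would need a request loop and per-attachment content-type detection):
var listener = new HttpListener();
listener.Prefixes.Add("http://localhost:3456/");
listener.Start();
Process.Start("http://localhost:3456/myrecord123/attachment1234.pdf"); // hand the URL to the default handler
HttpListenerContext context = listener.GetContext(); // blocks until the request arrives
byte[] bytes = GetAttachmentBytes(); // hypothetical: the varbinary column as byte[]
context.Response.ContentType = "application/pdf";
context.Response.ContentLength64 = bytes.Length;
context.Response.OutputStream.Write(bytes, 0, bytes.Length);
context.Response.Close();
listener.Stop();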