I have a portable audio player which lists directories and files (SD card, FAT32 storage), as per the manual, in the order they were copied. That is indeed what it does: when manually copying one file at a time, it displays those files in the order copied. However, I have already copied hundreds of files using a combination of Windows Explorer and Unison, so as a result everything gets displayed in what is basically random order. I'd like everything to be listed alphabetically, so I wrote the simplest piece of C# code I could think of: it lists directories alphabetically, does the same for the files inside them, and uses Directory.SetLastWriteTime and File.SetLastWriteTime to set the time on each, incrementing that time by 2 seconds on each iteration.
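For reference, a minimal sketch of that approach (my reconstruction, not the actual code; the drive letter is a placeholder and it only goes one level deep):

using System;
using System.IO;
using System.Linq;

class StampFiles
{
    static void Main()
    {
        // Hypothetical mount point of the SD card.
        DateTime stamp = new DateTime(2012, 1, 1);

        foreach (string dir in Directory.GetDirectories(@"E:\").OrderBy(d => d))
        {
            Directory.SetLastWriteTime(dir, stamp);
            stamp = stamp.AddSeconds(2);

            foreach (string file in Directory.GetFiles(dir).OrderBy(f => f))
            {
                File.SetLastWriteTime(file, stamp);
                stamp = stamp.AddSeconds(2);
            }
        }
    }
}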
Reading everything back using GetLastWriteTime confirms the timestamps are updated; however, the player does not do what would be expected. It does order directories correctly, and some of the files (probably those that were already copied in alphabetical order before), but not all of them. I tried everything again using SetCreationTime, but with the same result.
So which file property has to be set to make this work? Or is it some property of the FAT drive itself? And which functions should I use? (Plain C APIs are fine as well.)
fatsort claims to do this, so you can study its source code to find out how to re-order files.
Note that what you want to do is a fairly involved process: the player is evidently showing entries in the raw order they appear in the FAT directory table (which is why rewriting timestamps has no effect), so re-ordering means rewriting the internal structures of FAT itself. It is doable, though.
I have a boatload of old floppy discs that have images on them. I want to copy them, but they often have duplicated file names. I have a batch file that will copy and rename the files, but I have to run it every time I insert a disc. I am trying to make a C# application that detects when the status of the drive changes and then automatically copies and renames the files based on the current date/time.
Thanks in advance.
Since this is for yourself, you don't need a bulletproof solution.
Here's a high-level algorithm that could be implemented easily in C# (a sketch follows the list):
1. Make a function that lists all files in the floppy drive and returns whether it succeeded. Call it something like FloppyReady()
2. Loop until FloppyReady returns true
3. Copy all files and do your renaming scheme
4. Loop until FloppyReady returns false (floppy removed)
5. Goto #2
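A minimal sketch of that loop (the drive letter, destination, and renaming scheme are placeholders):

using System;
using System.IO;
using System.Threading;

class FloppyCopier
{
    const string DriveRoot = @"A:\";          // hypothetical floppy drive
    const string Destination = @"C:\Backup";  // hypothetical target folder

    static bool FloppyReady()
    {
        try
        {
            Directory.GetFiles(DriveRoot);    // throws if no disc is inserted
            return true;
        }
        catch (IOException)
        {
            return false;
        }
    }

    static void Main()
    {
        while (true)
        {
            while (!FloppyReady()) Thread.Sleep(1000);   // wait for a disc

            foreach (string file in Directory.GetFiles(DriveRoot))
            {
                // Rename based on the current date/time to avoid duplicates.
                string name = DateTime.Now.ToString("yyyyMMdd_HHmmss_fff")
                              + "_" + Path.GetFileName(file);
                File.Copy(file, Path.Combine(Destination, name));
            }

            while (FloppyReady()) Thread.Sleep(1000);    // wait for removal
        }
    }
}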
Imagine there is a game with a lot of content, like car models.
I don't want to keep them all in RAM the whole time, to save memory, and want to load them only when needed. That would be easy with one file per car, but I would end up with a huge number of files and the project would become hard to share, etc.
But when I store everything in one file, I don't know where anything is.
Is there a way to store everything in one file and navigate to the content as easily as if I had one file per content entry? The only thing I can imagine is saving the byte positions where each entry begins in a second file (or as a header in the content file), but I'm very unsure about that solution.
A simple way to do it is, when you write the file, to keep track (in memory) of where stuff is. Then, at the end of the file, you write an index. So your file looks like this:
offset 0: data for object 1
offset 0x0473: data for object 2
offset 0x1034: data for object 3
etc.
You can write this file using BinaryWriter. Before you write each object, you query writer.BaseStream.Position to get the current position in the file. You save that information in memory (just the object name and its position).
When you're done writing all of the objects, you save the current position:
indexPosition = writer.BaseStream.Position;
Then write the index at the end of the file:
name: "object 1", position: 0
name: "object 2", position: 0x0473
etc.
Write an empty index entry to signify the end of objects:
name: "", position: 0xFFFFFFFF
And the last thing you do is write the index position at the end of the file:
writer.Write(indexPosition);
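Putting the writing side together, a minimal sketch (the file name, object names, and payloads are placeholders):

using System.Collections.Generic;
using System.IO;
using System.Text;

class PackWriter
{
    static void Main()
    {
        // Hypothetical payloads; in a game these would be car models etc.
        var objects = new Dictionary<string, byte[]>
        {
            { "object 1", Encoding.UTF8.GetBytes("data for object 1") },
            { "object 2", Encoding.UTF8.GetBytes("data for object 2") }
        };

        var index = new List<KeyValuePair<string, long>>();

        using (var writer = new BinaryWriter(File.Create("content.dat")))
        {
            foreach (var obj in objects)
            {
                // Remember where this object starts before writing it.
                index.Add(new KeyValuePair<string, long>(obj.Key, writer.BaseStream.Position));
                writer.Write(obj.Value.Length);  // length prefix so the reader knows how much to read
                writer.Write(obj.Value);
            }

            long indexPosition = writer.BaseStream.Position;

            foreach (var entry in index)
            {
                writer.Write(entry.Key);         // BinaryWriter length-prefixes strings
                writer.Write(entry.Value);
            }

            writer.Write("");                    // empty entry marks the end of the index
            writer.Write(0xFFFFFFFFL);

            writer.Write(indexPosition);         // the last 8 bytes of the file
        }
    }
}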
Now, you can open the file with a BinaryReader. Seek to end-of-file minus 8 bytes, read the long integer there. That gives you the position of the index. Seek to the index and start reading index entries forward until you get to one that has a position of 0xFFFFFFFF.
You now have the index in memory. You can create a Dictionary<string, long> so that, given an object name, you can find its position in the file. Get the position, seek there, and read the object.
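And the reading side, matching the layout above:

using System;
using System.Collections.Generic;
using System.IO;

class PackReader
{
    static void Main()
    {
        var index = new Dictionary<string, long>();

        using (var reader = new BinaryReader(File.OpenRead("content.dat")))
        {
            // The last 8 bytes hold the position of the index.
            reader.BaseStream.Seek(-8, SeekOrigin.End);
            long indexPosition = reader.ReadInt64();

            reader.BaseStream.Seek(indexPosition, SeekOrigin.Begin);
            while (true)
            {
                string name = reader.ReadString();
                long position = reader.ReadInt64();
                if (position == 0xFFFFFFFFL)
                    break;                       // hit the empty terminator entry
                index[name] = position;
            }

            // Given an object name, seek to its data and read it back.
            reader.BaseStream.Seek(index["object 1"], SeekOrigin.Begin);
            int length = reader.ReadInt32();
            byte[] data = reader.ReadBytes(length);
            Console.WriteLine("Read {0} bytes for object 1", data.Length);
        }
    }
}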
This kind of thing was pretty common when we were writing games in the late '90s. You can still do it, although you're probably better off going with a simple database.
There are quite a few compression and archiving methods you could use to hold your files if you're looking to store them temporarily in a larger file for transport. Almost any compression method could work; which one you choose is entirely up to you.
Example .zip compression can be found here:
http://msdn.microsoft.com/en-us/library/ms404280.aspx
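If you can target .NET 4.5 or later, the System.IO.Compression.ZipFile class makes this a couple of lines (the paths here are placeholders; you need a reference to System.IO.Compression.FileSystem):

using System.IO.Compression;

class ZipExample
{
    static void Main()
    {
        // Pack a folder of content into one archive, then unpack it later.
        ZipFile.CreateFromDirectory(@"C:\game\cars", @"C:\game\cars.zip");
        ZipFile.ExtractToDirectory(@"C:\game\cars.zip", @"C:\game\cars_unpacked");
    }
}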
There's also the .cab file format that you can easily pack multiple files into and unpack later when you have need for them. There's an interesting article on creating .cab files in C# found here:
http://mnarinsky.blogspot.com/2009/11/creating-cab-file-in-c.html
It does require that you add references to Microsoft.Deployment.Compression.Cab.dll and Microsoft.Deployment.Compression.dll, but the code itself is fairly simple after that. If none of those is a suitable answer to your question, my only other suggestion is to organize the files better rather than cramming them all into a single folder, as that can make them quite difficult to navigate.
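To give a flavor of the DTF API that article uses, packing a folder into a .cab looks roughly like this (paths are placeholders and the exact method signatures may differ, so check the article):

using Microsoft.Deployment.Compression.Cab;

class CabExample
{
    static void Main()
    {
        // Pack a whole directory into a .cab, and unpack it again later.
        var cab = new CabInfo(@"C:\game\cars.cab");
        cab.Pack(@"C:\game\cars");
        cab.Unpack(@"C:\game\cars_unpacked");
    }
}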
Also try using a collection to keep track of the file names, if that helps. You could define the files in XML and load everything into a dictionary or hash table if it needs to be more dynamic, or define it in the code itself if you prefer that.
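For instance, a minimal sketch of loading such an XML manifest into a dictionary (the manifest layout here is made up):

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

class ManifestExample
{
    static void Main()
    {
        // Hypothetical manifest: <files><file name="car1" path="cars/car1.dat"/>...</files>
        Dictionary<string, string> files = XDocument.Load("manifest.xml")
            .Descendants("file")
            .ToDictionary(f => (string)f.Attribute("name"),
                          f => (string)f.Attribute("path"));
    }
}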
EDIT:
You can also try using a third-party installer for transport. Installers offer many functions beyond compressing and packing files, and will handle the data compression for you. I personally prefer NSIS, as I find it highly user-friendly, but any installer can work. A few example installers are:
Installshield: http://www.installshield.com/ (integrates with Visual Studio)
WIX: http://wix.sourceforge.net/ (also integrates with Visual Studio, good if you're looking for something more XML based)
NSIS: http://nsis.sourceforge.net/Main_Page (scriptable, doesn't integrate with Visual Studio, easy guided design with Nsisqssg if you'd prefer not to do the bulk of the scripting on your own)
They all function differently but essentially achieve the same end result. It all depends on exactly what you're looking for.
I have an idea for a C# program that works basically like Windows Explorer. The aim is to display all files and folders, and to show specific information for each of them. One of the features I'm planning is to determine folder sizes, which is something Explorer cannot do.
My idea for the algorithm is to accumulate the sizes of all files in the specific folder. However, I'm afraid of performance issues. For example, to display the sizes of all folders on C:, I have to consider every file on the whole drive. This will probably take a while, and thus the calculation can't be done each time the user switches to a different folder and back.
So I'd like to cache some of the sizes. However, when files change, are added or removed, the cache data becomes outdated. But I do not want to monitor all file changes while the program is not running.
Is there any way I can find out if the cache is up-to-date, e.g. by retrieving some sort of checksum that doesn't require calculating all sizes again? Is there another memory and CPU-efficient way to find out if file sizes have changed since the last calculation? Or is there even another possibility?
Windows Explorer has the folder size (number of files, size on disk, etc.) available in the Properties dialog of any disk/folder.
As for writing a program, you can certainly use a recursive DirectoryInfo.EnumerateFiles() to get all the files within a disk/folder.
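A minimal sketch of summing a folder's size that way (the path is a placeholder; a real program would also need to handle UnauthorizedAccessException for folders it cannot enter):

using System;
using System.IO;
using System.Linq;

class FolderSize
{
    static void Main()
    {
        var root = new DirectoryInfo(@"C:\SomeFolder");   // hypothetical folder

        // EnumerateFiles with AllDirectories recurses for you; summing the
        // lengths gives the cumulative size of everything underneath.
        long totalBytes = root.EnumerateFiles("*", SearchOption.AllDirectories)
                              .Sum(f => f.Length);

        Console.WriteLine("{0:N0} bytes", totalBytes);
    }
}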
As for monitoring, you can use the FileSystemWatcher class to monitor changes to any disk/folder.
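A minimal FileSystemWatcher sketch, again with a placeholder path (in a real program the handlers would update your size cache):

using System;
using System.IO;

class WatchExample
{
    static void Main()
    {
        var watcher = new FileSystemWatcher(@"C:\SomeFolder")   // hypothetical folder
        {
            IncludeSubdirectories = true,
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.Size
        };

        watcher.Created += (s, e) => Console.WriteLine("Created: " + e.FullPath);
        watcher.Changed += (s, e) => Console.WriteLine("Changed: " + e.FullPath);
        watcher.Deleted += (s, e) => Console.WriteLine("Deleted: " + e.FullPath);
        watcher.Renamed += (s, e) => Console.WriteLine("Renamed: " + e.FullPath);

        watcher.EnableRaisingEvents = true;
        Console.ReadLine();   // keep watching until Enter is pressed
    }
}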
To keep the cache up to date is going to be difficult because:
Depending on the partition's format type [FAT, FAT32, NTFS, etc.], you are limited to what each supports.
Any new file (created date > cache date) means you still have to enumerate all the files to filter the list down to the new ones.
Modified files (modified date > cache date) have the same issue.
Unless you use something VERY specific to the format type, beyond what C# provides, updating the cache will need to happen on every application launch, and will be very intensive.
Windows Explorer is a pretty crafty program. It is filled with tricks designed to hide the fact that any file system is punishingly slow to iterate. The kinds of tricks I know about:
1. Fake it. Show the folder hierarchy as a treeview and use the [+] glyph to show that a folder has files or directories inside of it, even when it doesn't. That's visible: create an empty directory and restart your machine. Note the [+] glyph; click it and notice that, when forced to iterate the sub-directory, it smoothly changes the [+] glyph to a 'nothing there' glyph.
2. Delay it. Harder to see; you need a subdirectory with a lot of files. Explorer starts a background thread that iterates the content of the folder. Once it has figured it out, it smoothly changes the status bar text.
3. Tell me what happened. Explorer uses ReadDirectoryChangesW() heavily, wrapped in .NET by the FileSystemWatcher class. The key point is that it gets a notification that something changed in the subdirectory the user is looking at. No polling required; that would have horrible perf. Go back to #2.
I'm creating a WP7 app that shows an inspirational text for every day and allows you to mark some of these texts as favorites. You can see the text for today, jump to a day in the calendar, or browse your favorites.
All texts are known prior to rollout/installation; I don't want to lazy-load them via cloud/web, I want to "install" them together with the app.
How should I store them? Should I use one of the open source databases for WP7 and create all rows on installation? Should I just hardcode them and save the favorites in an IsolatedStorage file?
EDIT: Is it possible to have the read-only data in an XML file in the Visual Studio project and mark it as a resource? Will the file then be rolled out automatically? Does this make sense?
If your concern is speed of loading / efficiency of reading the files then you'll have to test to see what works best. I'd start with what's simplest to implement and then change if necessary.
What is right for your app will depend on the total size of the data and the size of the individual pieces of text. As well as considering where you store the data, be sure to also consider the format you store it in, as deserialization/parsing is an overhead you should account for.
Remember to test this on an actual device as the performance you see on the emulator is not likely to be realistic of what your users will see.
Update
If it's read-only data, you probably want to add it as multiple content files (set the Build Action to Content) within the XAP.
The format of the files and how you divide the data between them will depend on the data and the app.
Having multiple files means you don't have to load all the data at once. (I assume you don't need to do that.) Just open the file you need.
Update 2
For an example of loading a resource file from the XAP see: http://blogs.msdn.com/b/silverlight_sdk/archive/2010/08/27/loading-a-static-xml-file-to-your-windows-phone-silverlight-app.aspx
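For a rough idea, reading a content file packaged in the XAP looks like this (the file name is a placeholder; this assumes its Build Action is set to Content):

using System;
using System.IO;
using System.Windows;

class TextLoader
{
    public static string LoadTexts()
    {
        // GetResourceStream with a relative Uri reads a file shipped in the XAP.
        var resource = Application.GetResourceStream(
            new Uri("Data/January.xml", UriKind.Relative));

        using (var reader = new StreamReader(resource.Stream))
        {
            return reader.ReadToEnd();
        }
    }
}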
I am creating an application to back up files from a source directory into a destination directory. I store the file information from the source and destination folders in separate lists, then I compare the lists based on size, date modified, etc. to see which files need to be copied.
Anyway, the point is that I end up with a list of the files that need to be copied, and I would like to know how long each file takes, so I have tried the following techniques:
Technique 1
Technique 2
Technique 3: the regular File.Copy("source....","Destination")
The first two techniques are great because I can see the progress. The problem is that when I copy some files with those techniques, the new file sometimes has different dates. I would like both files to have the same modified date and also the same creation date. Moreover, if for whatever reason my program crashes, the file that is being copied will be corrupted: I have tried copying a large file (one that takes about a minute to copy in Windows), and if I exit my program while the file is being copied, the partial file sometimes has the same attributes and the same size. So I want to make sure I don't end up with corrupted files in case my program crashes.
Maybe I should use either technique 1 or 2 and then, at the end, copy the attributes from the source file and assign them to the destination file. I don't know how to do that, though.
FileInfo has members CreationTime and LastWriteTime that are settable, so you could stick with your preferred technique and set the dates afterwards, if that helps.
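A minimal sketch of copying those stamps across once the copy has finished:

using System.IO;

class StampCopier
{
    // Copy the source file's dates onto the freshly written destination file.
    public static void CopyTimestamps(string sourcePath, string destinationPath)
    {
        var source = new FileInfo(sourcePath);
        var destination = new FileInfo(destinationPath);

        destination.CreationTime = source.CreationTime;
        destination.LastWriteTime = source.LastWriteTime;
    }
}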
Have you considered just writing a shell script that calls robocopy? Any time I've had to run backup tasks like this, I just write a script -- robocopy already does the heavy lifting for me, so there's often no need to create a bespoke application.
A solution that I have, but it's long:
I know I can copy the file from the source and give it a different name in the destination, something like "fileHasNotBeenCopiedYet", with the hidden attribute set. Then, when my program finishes copying, I rename the file to the source name and copy the attributes across. Later, I know that if a file with that name ("fileHasNotBeenCopiedYet") exists, the file is corrupted.
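A minimal sketch of that copy-then-rename idea (the marker suffix and helper name are made up):

using System.IO;

class SafeCopier
{
    const string TempSuffix = ".fileHasNotBeenCopiedYet";   // hypothetical marker

    public static void SafeCopy(string sourcePath, string destinationPath)
    {
        string tempPath = destinationPath + TempSuffix;

        // Copy under a temporary, hidden name; a crash leaves this marker behind.
        File.Copy(sourcePath, tempPath, true);
        File.SetAttributes(tempPath, FileAttributes.Hidden);

        // Carry the source's dates over to the new file.
        File.SetCreationTime(tempPath, File.GetCreationTime(sourcePath));
        File.SetLastWriteTime(tempPath, File.GetLastWriteTime(sourcePath));

        // Only once everything has succeeded does the file get its real name.
        File.SetAttributes(tempPath, FileAttributes.Normal);
        if (File.Exists(destinationPath))
            File.Delete(destinationPath);
        File.Move(tempPath, destinationPath);
    }
}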