How to read files from an uncompressed zip in C#?

I'm creating a PDA app and I need to upload/download a lot of small files, and my idea is to gather them in an uncompressed zip file.
The question is: is it a good idea to read those files from the zip without separating them? How can I do so? Or is it better to unzip them? Since the files are not compressed, my simple mind suggests that reading them from the zip should be more or less as efficient as reading them directly from the file system...
Thanks for your time!

Since there are two different open-source libraries (SharpZipLib and DotNetZip) that handle writing and extracting files from a zip file, why worry about doing it yourself?

ewww - don't use J#.
The DotNetZip library, as of v1.7, runs on the .NET Compact Framework 2.0 and above. It can handle reading or writing compressed or uncompressed files within a ZIP archive. The source distribution includes a CF example app. It's really simple.
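For illustration, a minimal sketch of reading entries straight out of an archive with DotNetZip; the archive name and the in-memory buffering are assumptions for the example, not part of the original answer:

    // Sketch: enumerate a zip and read each entry without unpacking to disk.
    // Assumes a reference to Ionic.Zip (DotNetZip); "bundle.zip" is a placeholder.
    using System.IO;
    using Ionic.Zip;

    static void ReadAllEntries()
    {
        using (ZipFile zip = ZipFile.Read("bundle.zip"))
        {
            foreach (ZipEntry entry in zip)
            {
                using (var buffer = new MemoryStream())
                {
                    entry.Extract(buffer);        // stored entries copy straight through
                    byte[] data = buffer.ToArray();
                    // ... hand data to the rest of the app ...
                }
            }
        }
    }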

Sounds as if you want to use the archive to group your files.
From the point of view of reading the files, it makes very little difference whether they are handled one way or the other. You would need to implement the ability to read zip files, though. Even if you use a library, as James Curran suggested, it means additional work, which can mean additional sources of error.
From the point of view of uploading the files, it makes more sense: the uploader could gather all the files needed and would have to take care of only a single upload. This reduces overhead as well as error handling (if one upload fails, do you have to delete all the files of this group that were already uploaded?).
As for the efficiency of reading them from the archive vs. reading them directly from disk: the difference should be minimal. You (or your zip library) need to parse the zip directory structure once, which is pretty straightforward. The rest is reading part of a file into memory vs. reading a whole file into memory.

Related

Efficiently and atomically exchange two files in Windows

My naive solution to maintain atomicity is to open streams on both files and exchange the contents via a temporary file. However, I understand that File.Move is much more efficient when both files exist on the same drive because no data is actually copied.
Unfortunately, C#'s File.Move requires that the destination file not exist, so it is impossible to use for an atomic exchange of two files.
Is there a way to ensure neither file will be touched during the exchange while still gaining the efficiency of renaming files that exist on the same drive?
Preferably, I'm looking for a solution in C#, though I'm not against using P/Invoke if there is a lower-level way to achieve this. My understanding is that OS X can achieve this via exchangedata() and Linux via renameat2(). Is there anything similar for Windows?
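For what it's worth, the cheap-rename route the question alludes to would look something like the sketch below; note that it is three separate renames, so it is efficient on one volume but not atomic as a whole:

    // Sketch: the three-rename swap hinted at above. Each File.Move on the
    // same volume is a cheap rename, but the sequence as a whole is NOT
    // atomic: a crash between moves leaves the temporary name behind.
    using System.IO;

    static void SwapFiles(string pathA, string pathB)
    {
        string temp = pathA + ".swap-tmp";   // temp name on the same drive

        File.Move(pathA, temp);   // A -> temp
        File.Move(pathB, pathA);  // B -> A
        File.Move(temp, pathB);   // temp -> B
    }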

Should you include files in the XML or use a two-step process?

I have to implement a way to transfer some information (name/address/etc.) between many organizations (an unknown number), along with an unknown number of files associated with that information.
When I say unknown files: it could be an XML file of over 100 MB if they are embedded.
The transfer will be done over XML, so the question is:
should I allow embedded files, Base64-encoded inside elements, or have a two-step process, which would be:
1. send me the XML file with a kind of pointer in an element, let's say filenames
2. send the files with the specific filenames from the XML
Or is there a third solution?
I have to deserialize the XML into an object, do some manipulation, then save it in a database.
(I currently have a throwaway prototype using the two-step process.)
Don't put the files in the XML; that would make it unwieldy. Instead, reference the file names from the XML, then zip the XML and the files up into one bundle and send that.
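A minimal sketch of building such a bundle, assuming DotNetZip (mentioned elsewhere on this page); the file names are placeholders:

    // Sketch: bundle the XML manifest and its referenced files into one
    // archive. Assumes a reference to Ionic.Zip (DotNetZip).
    using Ionic.Zip;

    static void CreateBundle(string xmlPath, string[] referencedFiles)
    {
        using (var zip = new ZipFile())
        {
            zip.AddFile(xmlPath);            // the XML with filename pointers
            zip.AddFiles(referencedFiles);   // the files it references
            zip.Save("bundle.zip");
        }
    }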
Be sure to consider the expected evolution of the data, how change occurs across the parts of the document, and how many parties have an interest in the updates.
At the one end of the spectrum, the data will never change, the parts are all static, and updates aren't an issue to anyone. A one-shot broadcast of a single large file (or zipped set of files) is good enough. I'd lean toward a zipped archive with linked components over an embedding/encoding solution here.
The other end of the spectrum calls for a "third solution," as you say. The data changes frequently and independently, some parts of the massive document change while others remain constant, and many parties are interested in having access to the current version of the evolving data. Here, a linked representation of the various parts of the resource as references to network-shared parts, possibly independently version controlled, would have a major advantage. Linked data is a robust solution worth considering over monolithic distribution of a massive file.

Extract .tbz file using .net

I'm trying to extract a .tbz file using .NET.
Anyone have any suggestions?
The file will be very large (3GB) if this makes any difference.
First you'll need to understand, implement, or use a supporting library for BZIP2 decompression. Then you'll need to do the same, in a different fashion, for the TAR format to de-archive the contents of the resulting TAR file.
You can use SharpZipLib:
http://www.icsharpcode.net/opensource/sharpziplib/
However, it is licensed under the GPL, with an exception that I'm not familiar with, so you'll have to read it carefully to see if it suits your needs.
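For illustration, a rough sketch of the two-stage decode using SharpZipLib's BZip2 and Tar classes; the paths are placeholders, and with a 3 GB archive it matters that everything stays streamed like this:

    // Sketch: decompress the BZIP2 layer, then de-archive the TAR layer.
    // Assumes a reference to ICSharpCode.SharpZipLib.
    using System.IO;
    using ICSharpCode.SharpZipLib.BZip2;
    using ICSharpCode.SharpZipLib.Tar;

    static void ExtractTbz(string tbzPath, string outputDir)
    {
        using (var file = File.OpenRead(tbzPath))
        using (var bzip2 = new BZip2InputStream(file))
        {
            // Streams end-to-end, so the 3 GB archive is never buffered in memory.
            TarArchive tar = TarArchive.CreateInputTarArchive(bzip2);
            tar.ExtractContents(outputDir);
            tar.Close();
        }
    }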

Editing large binary files

I'm busy with a little project which has a lot of data, like images, text files, and other things, and I'm trying to pack it all up in one big file or multiple big files so the program folder doesn't look messy.
But the problem is how to edit these files. I've thought about the file structure, and it's going to be something like this:
[DWORD] Number of files
[DWORD]FileId
[STRING]FileName
[DWORD]FileSize
[DWORD]FileIndex
[BYTES]All the files
So the first part is to quickly get a list of all the files, and FileIndex is the position in the binary file, so I can set the pointer to, for example, 300 and read the file.
But if I want to create a patch and edit a file, I would have to read all the bytes after the file I'm editing and copy them all back, which could take ages with a couple of files.
The binary file could be a few hundred MB when all the files are inserted.
So how do other programs do this? For example, games use these big files and also patch a lot. Is there some kind of trick to insert extra bytes more quickly?
There is no "trick" to inserting bytes in the middle of a file.
Usually solutions involve adding files to the end of the file, then switching their position in the index. Then you run into the problem of having to defragment the file. You can break files into large chunks which can mitigate some of the defragmentation woes, but then the files are not contiguous.
If you are dealing with non-static data, I would not recommend doing this unless you absolutely have to. I've seen absolutely brilliant software engineers take a considerable amount of time to write a reasonable implementation of this.
Using SQLite as a virtual file system can be a viable solution to this. But then again, so is putting the data files in another folder so it doesn't look "messy".
If at all possible, I'd probably package the data up into a zip file. This will not only clean up your directory, but (especially for the text files you mention) throw in some compression essentially for free. There are also, of course, quite a few existing tools and libraries for creating, examining, modifying, etc., a zip file.
Using zlib (for one example), most of the work is handled for you (e.g., as demonstrated in minizip).
The trick is to make patches by overwriting the data in place. Otherwise, there are systems designed to manage large volumes of data, for example databases.
You can create a database file that accompanies your program and hold all your data there instead of in files. You can even embed the database code in your application, with SQLite, for example, or use an external DBMS like SQL Server, Oracle, or MySQL.
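As a rough sketch of the embedded-database route, assuming the System.Data.SQLite provider; the "files" table layout is made up for the example:

    // Sketch: store data files as blobs in an embedded SQLite database.
    using System.Data.SQLite;
    using System.IO;

    static void StoreFile(string dbPath, string filePath)
    {
        using (var conn = new SQLiteConnection("Data Source=" + dbPath))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText =
                    "CREATE TABLE IF NOT EXISTS files (name TEXT PRIMARY KEY, data BLOB)";
                cmd.ExecuteNonQuery();
            }
            using (var cmd = conn.CreateCommand())
            {
                // Patching a file is then just replacing one row: no
                // byte-shifting inside a giant pack file.
                cmd.CommandText = "INSERT OR REPLACE INTO files (name, data) VALUES (@n, @d)";
                cmd.Parameters.AddWithValue("@n", Path.GetFileName(filePath));
                cmd.Parameters.AddWithValue("@d", File.ReadAllBytes(filePath));
                cmd.ExecuteNonQuery();
            }
        }
    }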
What you're describing is basically implementing your own file system. It's a tricky and very difficult task to make that effective.
You could treat the packing and editing program sort of like a custom memory allocator (a sketch follows this list):
- Use a minimum block size. When you add a file, use enough whole blocks to fit the file. This automatically gives the files some room to grow without affecting the others.
- When a file gets too big for its current allocation, move it to the end of the package.
- Mark the freed blocks as free, and keep the offset to the head of the free list in the package header. When adding other files, first check whether there is a free block big enough for them.
- When extending a file past its current block, check whether the following block is on the free list.
- If the free list gets too long (too much fragmentation), consolidate the package: move each file forward to start in the first free block. This has to rewrite the whole file, but it would happen rarely.
Alternatively, instead of the simple directory you have, use something like a FAT: for each file, store a list of chunks and sizes. When you extend a file past its current allocation, add another chunk with the remainder. Defragment occasionally as needed.
Both of these add a little overhead to the package, but leaving gaps is really the only alternative to rewriting the whole thing on every insert.
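A minimal sketch of the block-allocation idea above; the class name and constants are invented for illustration, and the free list here lives in memory rather than in the package header:

    // Sketch: block-based allocation inside a pack file, with a fixed
    // block size and a simple free list of block offsets.
    using System.Collections.Generic;

    class PackAllocator
    {
        const long BlockSize = 4096;                        // minimum allocation unit
        readonly List<long> freeBlocks = new List<long>();  // offsets of freed blocks
        long endOfPackage;                                  // offset just past the last block

        public PackAllocator(long initialEnd)
        {
            endOfPackage = initialEnd;
        }

        // Round a file size up to whole blocks so it has room to grow.
        static long BlocksFor(long size)
        {
            return (size + BlockSize - 1) / BlockSize;
        }

        // Reuse a freed block when the file fits in one; otherwise
        // extend the package at the end.
        public long Allocate(long size)
        {
            if (BlocksFor(size) == 1 && freeBlocks.Count > 0)
            {
                long offset = freeBlocks[freeBlocks.Count - 1];
                freeBlocks.RemoveAt(freeBlocks.Count - 1);
                return offset;
            }
            long start = endOfPackage;
            endOfPackage += BlocksFor(size) * BlockSize;
            return start;
        }

        // Return a file's blocks to the free list so later writes can reuse them.
        public void Free(long offset, long size)
        {
            for (long i = 0; i < BlocksFor(size); i++)
                freeBlocks.Add(offset + i * BlockSize);
        }
    }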
There is no way to insert bytes into a file other than the one you described. This is independent of the programming language; it's just how file systems work.
You can overwrite parts of the file, but only as long as you respect the byte count.
Have you thought about using a .zip file? I keep seeing formats out there where multiple files are stored as one, and the underlying file is really a zip file. The nice thing about this is that the zip library handles the low-level bit-tracking stuff for you.
A couple of examples that come to mind:
A Word .docx file is really a zip (rename one to .zip, and you can open it -- it has whole folders in it)
The .xap file that Silverlight packages use is another one.
You can use managed shared memory, backed by a memory-mapped file. You still have to have sufficient address space for the whole file, but you don't need to copy the whole file into memory. You can use most standard facilities with a shared-memory allocator, though you may quickly find that specifying a custom allocator everywhere is a chore. The good news is that you don't need to implement it all yourself: you can take Boost.Interprocess, which already has all the necessary facilities for both Unix and Windows.
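On the managed side, .NET 4's MemoryMappedFile offers a similar facility; here is a sketch of an in-place overwrite, where the method and parameter names are illustrative:

    // Sketch: patch bytes in place through a memory-mapped view, the
    // managed counterpart to the Boost.Interprocess approach above.
    using System.IO.MemoryMappedFiles;

    static void PatchInPlace(string packPath, long offset, byte[] patch)
    {
        using (var mmf = MemoryMappedFile.CreateFromFile(packPath))
        using (var view = mmf.CreateViewAccessor(offset, patch.Length))
        {
            // Overwrites exactly patch.Length bytes; the file length never changes.
            view.WriteArray(0, patch, 0, patch.Length);
        }
    }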

Attaching arbitrary data to DirectoryInfo/FileInfo?

I have a site which is akin to SVN, but without the version control. Users can upload to and download from Projects, where each Project has a directory (with subdirectories and files) on the server. What I'd like to do is attach further information to files, like who uploaded them, how many times they've been downloaded, and so on. Is there a way to do this with FileInfo, or should I store this in a table that associates it with an absolute path or something? That way sounds dodgy and error-prone :\
It is possible to attach extra data to arbitrary files with NTFS (the default Windows filesystem, which I'm assuming you're using). You'd use alternate data streams. Microsoft uses this for extended metadata, like author and summary information, in Office documents.
Really, though, the database approach is reasonable, widely used, and much less error-prone, in my opinion. It's not really a good idea to be modifying the original file unless you're actually changing its content.
As Michael Petrotta points out, alternate data streams are a nifty idea. Here's a C# tutorial with code. Really though, a database is the way to go. SQL Compact and SQLite are fairly low-impact and straightforward to use.
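For the curious, a rough sketch of writing an alternate data stream from C#; the Framework-era managed file APIs reject ':' stream syntax in paths, so this goes through CreateFile via P/Invoke, and the "uploader" stream name is hypothetical:

    // Sketch: write an NTFS alternate data stream by opening
    // "file.ext:streamname" through the Win32 CreateFile API.
    using System;
    using System.IO;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    static class AdsExample
    {
        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern SafeFileHandle CreateFile(
            string name, uint access, uint share, IntPtr security,
            uint creation, uint flags, IntPtr template);

        const uint GENERIC_WRITE = 0x40000000;
        const uint CREATE_ALWAYS = 2;

        public static void WriteStream(string filePath, string streamName, string value)
        {
            SafeFileHandle handle = CreateFile(filePath + ":" + streamName,
                GENERIC_WRITE, 0, IntPtr.Zero, CREATE_ALWAYS, 0, IntPtr.Zero);
            if (handle.IsInvalid)
                throw new IOException("CreateFile failed", Marshal.GetLastWin32Error());

            using (var stream = new FileStream(handle, FileAccess.Write))
            using (var writer = new StreamWriter(stream))
            {
                writer.Write(value);   // e.g. the uploader's user name
            }
        }
    }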
