I am currently learning C# during my studies, and I am writing a small movie database application in it. I know it's not good to save pictures (etc.) inside the database, especially when adding movie covers that are rather big. But I don't want the files to just get saved in a folder, as this creates a mess as more and more movies are added to the list.
Does anyone know a way to store the files in some sort of container file (like covers.xxx)? The container would hold all covers in one big file, and the individual files could then be retrieved by an address or by name.
Thanks :)
http://dotnetzip.codeplex.com/
Use the library above with the following code snippet:
using (ZipFile zip = new ZipFile())
{
    // add this map file into the "images" directory in the zip archive
    zip.AddFile("c:\\images\\personal\\7440-N49th.png", "images");
    // add the report into a different directory in the archive
    zip.AddFile("c:\\Reports\\2008-Regional-Sales-Report.pdf", "files");
    zip.AddFile("ReadMe.txt");
    zip.Save("MyZipFile.zip");
}
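To read a cover back out of the archive later, you can look the entry up by name. A hedged sketch using the same DotNetZip library (the archive path and entry name are placeholders):

```csharp
using System.IO;
using Ionic.Zip;

class CoverStore
{
    // Extract a single cover from the archive into a MemoryStream,
    // so it can be loaded into an Image without touching the disk.
    static MemoryStream GetCover(string archivePath, string entryName)
    {
        var ms = new MemoryStream();
        using (ZipFile zip = ZipFile.Read(archivePath))
        {
            // entries are addressed by their path inside the archive
            zip[entryName].Extract(ms);
        }
        ms.Position = 0;
        return ms;
    }
}
```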
I can't see why storing the files as binary in the db is necessarily a bad idea.
However, if it's definitely out, then it sounds like you want a way to store a non-compressed compilation of files - basically a .zip that isn't compressed. You could achieve this yourself by creating a file that is simply the data of the files you want, appended together with some sort of unique header string in between that you can split on when reading the file back. Ultimately this just simulates a basic file database, so I'm not sure what you'd gain, but it's an idea.
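One caveat with a delimiter string is that binary image data could contain it by accident, so a length-prefixed layout is safer. A minimal sketch of such a container; the file layout and class names here are made up for illustration:

```csharp
using System.IO;

static class CoverContainer
{
    // Append one entry: name, payload length, then the raw bytes.
    public static void Add(string containerPath, string name, byte[] data)
    {
        using (var fs = new FileStream(containerPath, FileMode.Append))
        using (var w = new BinaryWriter(fs))
        {
            w.Write(name);        // length-prefixed string
            w.Write(data.Length); // byte count of the payload
            w.Write(data);
        }
    }

    // Scan the container until the requested name is found.
    public static byte[] Find(string containerPath, string name)
    {
        using (var fs = File.OpenRead(containerPath))
        using (var r = new BinaryReader(fs))
        {
            while (fs.Position < fs.Length)
            {
                string entryName = r.ReadString();
                int length = r.ReadInt32();
                if (entryName == name)
                    return r.ReadBytes(length);
                fs.Seek(length, SeekOrigin.Current); // skip this entry
            }
        }
        return null; // not found
    }
}
```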
This is a very specific question. I am creating a coversheet with data from existing PDFs located on a network drive. I have built an application that will find, sort and then add the coversheet in the first position finally merging them together by group.
However, in order to do this, I have to save the coversheet somewhere so it can then be picked up and merged. I simply insert the new file location into a List of file locations, then it picks them up and merges them in the order I created.
My question is: is there a way to use MemoryStream to create the file in memory, then add it to the rest of the PDFs that are being read from their drive locations using FileStream? Everything I have seen requires the file to be saved somewhere before adding. Since I am using a network drive, I would love to avoid saving a temp file that then just gets deleted.
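For what it's worth, most PDF libraries accept a stream wherever they accept a file path. Assuming iTextSharp (an assumption; the question doesn't name its library), the coversheet could live only in a MemoryStream and be merged first, ahead of the network-drive files:

```csharp
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

class CoversheetMerge
{
    // Build the coversheet into memory instead of a temp file on disk.
    static byte[] BuildCoversheet()
    {
        using (var ms = new MemoryStream())
        {
            var doc = new Document();
            PdfWriter.GetInstance(doc, ms);
            doc.Open();
            doc.Add(new Paragraph("Coversheet")); // placeholder content
            doc.Close();
            return ms.ToArray();
        }
    }

    // Merge the in-memory coversheet ahead of the PDFs on disk.
    static void Merge(byte[] coversheet, string[] pdfPaths, string outputPath)
    {
        using (var output = new FileStream(outputPath, FileMode.Create))
        {
            var doc = new Document();
            var copy = new PdfCopy(doc, output);
            doc.Open();
            copy.AddDocument(new PdfReader(coversheet)); // in-memory first
            foreach (string path in pdfPaths)
                copy.AddDocument(new PdfReader(path));   // then the files on disk
            doc.Close();
        }
    }
}
```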
I have a requirement like this: I have a bunch of folders, and each folder contains a set of .pdf, .doc and .xls files. I need to upload a folder at a time to a database and read the folder's contents; in the database, each entry will be stored in the format of file type, file name and file size. I have to do this in C#. Please, someone help me urgently. Thanks in advance...
Regards,
Snehasis
The how-to guide for iterating through files and folders here could be a good starting point. The sample code described in the guide iterates through folders recursively and saves the list of files inside each folder in a System.IO.FileInfo[] array, from which you can get all the required information about a particular file, such as name, type, etc., and then save it to the database.
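A rough sketch of that approach (the root folder is hypothetical, and the database write is reduced to a console line for illustration):

```csharp
using System;
using System.IO;

class FolderScanner
{
    // Walk a folder tree and report each file's name, type and size,
    // which is the information the database row needs.
    static void Scan(DirectoryInfo dir)
    {
        foreach (FileInfo file in dir.GetFiles())
        {
            // file.Extension is the "type", file.Length the size in bytes
            Console.WriteLine("{0}\t{1}\t{2}", file.Name, file.Extension, file.Length);
        }
        foreach (DirectoryInfo sub in dir.GetDirectories())
            Scan(sub); // recurse into subfolders
    }

    static void Main()
    {
        Scan(new DirectoryInfo(@"C:\Uploads")); // hypothetical root folder
    }
}
```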
I am building an interface whose primary function would be to act as a file renaming tool (the underlying task here is to manually classify each file within a folder according to rules that describe their content). So far, I have implemented a customized file explorer and a preview window for the files.
I now have to find a way to inform a user if a file has already been renamed (this will show up in the file explorer's listView). The program should be able to read as well as modify that state as the files are renamed. I simply do not know what method is optimal to save this kind of information, as I am not fully used to C#'s potential yet. My initial solution involved text files, but again, I do not know if there should be only one text file for all files and folders or simply a text file per folder indicating the state of its contained items.
A colleague suggested that I use an Excel spreadsheet and then simply import the row or columns corresponding to my query. I tried to find more direct data structures, but again I would feel a lot more comfortable with some outside opinion.
So, what do you think would be the best way to store this kind of data?
PS: There are many thousands of files, all of them TIFF images, located on a remote server to which I have complete access.
I'm not sure what you're asking for, but if you simply want to keep some of a file's information, such as name, date, size, etc., you could start from the FileInfo class. Note that FileInfo itself has no parameterless constructor, so XmlSerializer can't handle it directly; copy the properties you need into a small class of your own, and you can easily write an array of those to an XML file with XmlSerializer.
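A minimal sketch of that idea; FileMeta is a made-up class holding just the properties worth keeping, plus the renamed-or-not flag from the question:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// XmlSerializer needs a public type with a parameterless constructor,
// so we copy the interesting FileInfo properties into our own class.
public class FileMeta
{
    public string Name;
    public long Size;
    public DateTime LastWriteTime;
    public bool Renamed; // the application-specific state to track
}

class MetaStore
{
    static void Save(FileMeta[] items, string path)
    {
        var serializer = new XmlSerializer(typeof(FileMeta[]));
        using (var fs = File.Create(path))
            serializer.Serialize(fs, items);
    }

    static FileMeta[] Load(string path)
    {
        var serializer = new XmlSerializer(typeof(FileMeta[]));
        using (var fs = File.OpenRead(path))
            return (FileMeta[])serializer.Deserialize(fs);
    }
}
```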
I am not sure I understand your question, but from what I gather, you want to store the meta-data for each file. If that is the case, I can make two suggestions.
Store the meta-data in a simple XML file. One XML file per folder if you have multiple folders, the XML file could be a hidden file. Then your custom application can load the file if it exists when you navigate to the folder and present the data to the user.
If you are using NTFS and you know this will always be the case, you can store the meta-data for the file in an alternate data stream. This is not a .NET stream, but an extra stream of data that can be stored and moved around with each file without impacting the actual file's content. The nice thing about this is that no matter where you move the file, the meta-data moves with it, as long as it stays on NTFS.
Here is more info on alternate data streams:
http://msdn.microsoft.com/en-us/library/aa364404(VS.85).aspx
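For illustration, on .NET Core/.NET 5+ an alternate data stream can be addressed simply by appending ":streamname" to the path (older .NET Framework versions reject the colon in paths, so there you would need P/Invoke). The ":meta" stream name below is an arbitrary assumption:

```csharp
using System.IO;

class AdsExample
{
    // Write and read metadata in an NTFS alternate data stream.
    // Requires an NTFS volume and .NET Core / .NET 5+ on Windows;
    // the ":meta" stream name is arbitrary.
    static void Tag(string filePath, string metadata)
    {
        File.WriteAllText(filePath + ":meta", metadata);
    }

    static string ReadTag(string filePath)
    {
        return File.ReadAllText(filePath + ":meta");
    }
}
```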
You could create an object oriented structure and then serialize the root object to a binary file or to an XML file. You could represent just about any structure this way, so you wouldn't have to struggle with the
"I do not know if there should be only one text file for all files and folders or simply a text file per folder indicating the state of its contained items."
design issues. You would just have one file containing all of the metadata that you need to store. If you want speedier opening/saving and smaller size, go with binary; if you want something that other people could open, view and potentially write their own software against, use XML.
There's lots of variations on how to do this, but to get you started here is one article from a quick Google:
http://www.codeproject.com/KB/cs/objserial.aspx
I have numerous byte[] arrays representing PDFs. Each byte array needs to be loaded at the start of the application and shown as a thumbnail on my GUI. So far I have managed to write the files to a temp location using:
System.IO.Path.GetTempPath();
Then, using this path, I write each PDF out with
System.IO.File.WriteAllBytes(fileName, arrayOfPdfs[i]);
and then navigate to that directory, get all PDF files and turn them into thumbnails in my app.
The thing is, I only want the PDFs I have just put in the temp location, so how else can I store the PDFs, or where else can I store them, so that when I come to turn them into thumbnails I know the files I am reading are the ones I have just written? This is so I can be sure the user is only looking at the PDFs relevant to their search on my system.
Thanks.
Build a randomly named directory in the base temporary directory:
string tempDir = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
Directory.CreateDirectory(tempDir);
Store your files in there.
I would recommend your user's ApplicationData/LocalApplicationData folder, provided by the OS for your app:
Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
Of course if the storage doesn't need to persist very long (really temporary) then you could just use the temp folder and create a folder inside of it to isolate your files.
Could you just create a subdirectory in the Temp path?
string dir = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "<searchstring>");
Use Path.GetTempFileName and keep track of the temporary files you've allocated during the session. You should clean them up when your program exits.
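A sketch of that bookkeeping (the class name is made up):

```csharp
using System.Collections.Generic;
using System.IO;

class TempFileTracker
{
    private readonly List<string> _allocated = new List<string>();

    // Allocate a temp file for one PDF and remember it for later cleanup.
    public string Allocate(byte[] pdfBytes)
    {
        string path = Path.GetTempFileName();
        File.WriteAllBytes(path, pdfBytes);
        _allocated.Add(path);
        return path;
    }

    // Call this when the program exits.
    public void CleanUp()
    {
        foreach (string path in _allocated)
            if (File.Exists(path)) File.Delete(path);
        _allocated.Clear();
    }
}
```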
You can either:
Record the creation time of the first and last item, then only edit files which were created in that creation window
Move the files to a holding folder, create the thumbnails, and then move them to the folder they're meant to be in, ensuring that the holding folder is empty at the end of the run
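The first option might look like this; the window bounds are timestamps you record before and after writing the batch:

```csharp
using System;
using System.IO;
using System.Linq;

class RecentPdfFilter
{
    // Only pick up PDFs created between the recorded start and end times,
    // ignoring anything else that happens to be in the temp folder.
    static string[] PdfsInWindow(string folder, DateTime start, DateTime end)
    {
        return Directory.GetFiles(folder, "*.pdf")
            .Where(f =>
            {
                DateTime created = File.GetCreationTime(f);
                return created >= start && created <= end;
            })
            .ToArray();
    }
}
```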
I've got a project which requires a fairly complicated process and I want to make sure I know the best way to do this. I'm using ASP.net C# with Adobe Flex 3. The app server is Mosso (cloud server) and the file storage server is Amazon S3. The existing site can be viewed at NoiseTrade.com
I need to do this:
1. Allow users to upload MP3 files to an album "widget"
2. After the user has uploaded their album/widget, automatically zip the MP3s (for other users to download) and upload the zip along with the MP3 tracks to Amazon S3
I actually have this working already (using client side processing in Flex) but this no longer works because of Adobe's flash 10 "security" update. So now I need to implement this server-side.
The way I am thinking of doing this is:
1. Store the MP3s in a temporary folder on the app server
2. When the artist "publishes", create a zip of the files in that folder using a C# library
3. Start the Amazon S3 upload process (zip and MP3s) and email the user when it is finished (as well as deleting the temporary folder)
The major problem I see with this approach is that if a user deletes or adds a track later on, I'll have to update the zip file, but the temporary files will no longer exist.
I'm at a loss at the best way to do this and would appreciate any advice you might have.
Thanks!
The bit about updating the zip but not having the temporary files if the user adds or removes a track leads me to suspect that you want to build zips containing multiple tracks, possibly complete albums. If this is incorrect and you're just putting a single mp3 into each zip, then StingyJack is right and you'll probably end up making the file (slightly) larger rather than smaller by zipping it.
If my interpretation is correct, then you're in luck. Command-line zip tools frequently have flags which can be used to add files to or delete files from an existing zip archive. You have not stated which library or other method you're using to do the zipping, but I expect that it probably has this capability as well.
MP3's are compressed. Why bother zipping them?
I would say it is not necessary to zip an already-compressed file format; you are only going to get about a five percent reduction in file size, give or take. MP3s don't really zip up well because, by their nature, they have already compressed most of the redundant data.
DotNetZip can zip up files from C#/ASP.NET. I concur with the prior posters regarding the compressibility of MP3s: DotNetZip will automatically skip compression on MP3s and just store the file, for exactly this reason. It can still be useful to use a zip as a packaging/archive container, aside from the compression.
If you change the zip file later (user adds a track), you could grab the .zip file from S3, and just update it. DotNetZip can update zip files, too. But in this case you would have to pay for the transfer cost into and out of S3.
DotNetZip can do all of this with in-memory handling of the zips - though that may not be feasible for large archives with lots of MP3s and lots of concurrent users.
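A sketch of that update path with DotNetZip; the S3 download and upload are left as byte[] parameters since those calls depend on your SDK:

```csharp
using System.IO;
using Ionic.Zip;

class AlbumZipUpdater
{
    // Take the zip previously stored on S3 (as raw bytes), add or replace
    // one track in memory, and return the new zip bytes to upload back.
    static byte[] AddTrack(byte[] existingZip, string trackName, byte[] mp3Bytes)
    {
        using (var input = new MemoryStream(existingZip))
        using (ZipFile zip = ZipFile.Read(input))
        using (var output = new MemoryStream())
        {
            // UpdateEntry replaces the entry if it exists, adds it otherwise
            zip.UpdateEntry(trackName, mp3Bytes);
            zip.Save(output);
            return output.ToArray();
        }
    }
}
```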