I have numerous byte[] arrays, each representing a PDF. Each byte array needs to be loaded at the start of the application and shown as a thumbnail in my GUI. So far I have managed to get a temp location using:
System.IO.Path.GetTempPath();
Then, using this path, I write each byte array with
System.IO.File.WriteAllBytes(fileName, arrayOfPdfs[i]);
and then navigate to that directory, get all the PDF files, and turn them into thumbnails in my app.
The thing is, I only want the PDFs I have just put in the temp location. How else, or where else, can I store the PDFs so that when I come to turn them into thumbnails, the files I am reading are the ones I have just written? This way I can be sure the user is only looking at the PDFs relevant to their search on my system.
Thanks.
Build a randomly named directory in the base temporary directory:
string directoryName = Path.GetRandomFileName();
Directory.CreateDirectory(Path.Combine(Path.GetTempPath(), directoryName));
Store your files in there.
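A minimal sketch of the whole flow with that scheme, reusing arrayOfPdfs from the question:

using System.IO;

// isolated, randomly named directory under the system temp path
string searchDir = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
Directory.CreateDirectory(searchDir);

// write each PDF from this search into the isolated directory
for (int i = 0; i < arrayOfPdfs.Length; i++)
{
    File.WriteAllBytes(Path.Combine(searchDir, "result" + i + ".pdf"), arrayOfPdfs[i]);
}

// everything in here is guaranteed to belong to the current search
string[] pdfsToThumbnail = Directory.GetFiles(searchDir, "*.pdf");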
I would recommend your user's ApplicationData/LocalApplicationData folder, provided by the OS for your app:
Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
Of course, if the storage doesn't need to persist very long (it's really temporary), then you could just use the temp folder and create a folder inside it to isolate your files.
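A short sketch of that, assuming a hypothetical application folder name "PdfViewer":

using System;
using System.IO;

// per-user, non-roaming storage for this application
string appDir = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
    "PdfViewer");
Directory.CreateDirectory(appDir);   // no-op if it already exists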
Could you just create a subdirectory in the Temp path?
string dir = Path.Combine(System.IO.Path.GetTempPath(), searchString); // searchString identifies this search
Use Path.GetTempFileName and keep track of the temporary files that you've allocated during the session. You should clean up when your program exits.
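A minimal sketch of that bookkeeping (pdfBytes stands in for one of the byte arrays from the question):

using System.Collections.Generic;
using System.IO;

List<string> sessionFiles = new List<string>();

// allocate a uniquely named temp file and remember it
string tempFile = Path.GetTempFileName();
sessionFiles.Add(tempFile);
File.WriteAllBytes(tempFile, pdfBytes);

// on exit, remove everything this session created
foreach (string f in sessionFiles)
{
    File.Delete(f);
}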
You can either:
Record the creation time of the first and last item, then only process files which were created in that window (see the sketch after this list)
Move the files to a holding folder, create the thumbnails, and then move them to the folder they're meant to be in, ensuring that the holding folder is empty at the end of the run
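A rough sketch of the first option (tempDir stands for wherever the PDFs are written):

using System;
using System.IO;
using System.Linq;

DateTime windowStart = DateTime.Now;   // taken just before writing the first file
// ... write the PDFs to tempDir here ...
DateTime windowEnd = DateTime.Now;     // taken just after writing the last file

// keep only the files created inside this batch's window
var batchFiles = Directory.GetFiles(tempDir, "*.pdf")
    .Where(f => File.GetCreationTime(f) >= windowStart &&
                File.GetCreationTime(f) <= windowEnd);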
I have to write a program to copy PDFs from a directory and paste them into a different directory.
I have an Excel file which holds the mapping of which PDF file has to be copied from the main folder to which subfolder.
I would just like some pseudocode to help me begin the program.
My idea is to first select all files which have to be copied to the same folder, then repeat the process until no files are left to be copied.
Help me improve on this.
I have to make a program which does this process in one click.
I suggest you export the Excel mapping to a CSV file and process it line by line. A rough sketch, assuming each line holds the source file name and the destination subfolder, and mainFolder is the main folder from the question:
using System.IO;

// each CSV line: fileName,destinationFolder
foreach (string line in File.ReadLines("mapping.csv"))
{
    string[] data = line.Split(',');
    File.Copy(Path.Combine(mainFolder, data[0].Trim()),
              Path.Combine(data[1].Trim(), data[0].Trim()));
}
I have an application that downloads a file from FTP, reads it, and then deletes it (I download to a temporary file because the stream gets disposed before I reach the end of the data, and I get an exception). I was wondering: what is the programming convention for storing temporary files? Right now I just download the file to the desktop directory (still in the testing phase), so it pops up on the desktop for a second while it's read and then deleted.
Use System.IO.Path.GetTempFileName() to get a randomly named file in the system's temp directory. Download it there.
Be sure to use System.IO.File.Delete() when you're done with it!
https://msdn.microsoft.com/en-us/library/system.io.path.gettempfilename%28v=vs.110%29.aspx
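A minimal sketch of that pattern (the FTP address and credentials are placeholders):

using System.IO;
using System.Net;

string tempFile = Path.GetTempFileName();   // unique file in the system temp directory
try
{
    using (var client = new WebClient())
    {
        client.Credentials = new NetworkCredential("user", "password");
        client.DownloadFile("ftp://example.com/remote/data.txt", tempFile);
    }

    string contents = File.ReadAllText(tempFile);
    // ... process contents ...
}
finally
{
    File.Delete(tempFile);   // always clean up the temp file
}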
I need to copy files from my local hard drive to an external hard drive. My thought is that I only want to copy the files that do not already exist on the external drive. I am sure there is a much easier way to do this, but this is where my mind went first.
My thoughts on how to accomplish this:
1) Get a list of all files on my C: drive and write to a text file
2) Get a list of all files on my L: drive (backup) and write to a text file
3) Compare C: drive text file to L: drive text file to find the files that do not exist
4) Write results of the files that do not exist to an array
5) Iterate through the newly created array and copy the files to the L: drive
Is there a more effective/time efficient way to accomplish this task?
For sure you don't want to create text files listing file names, and then compare them. That will be inefficient and clunky. The way to do this is to walk through the source directories looking for all the files. As you go, you'll be creating a matching destination path for each file. Just before you copy the file you need to decide whether or not to copy it. If a file exists at the destination path, skip copying.
Some enhancements on that might include skipping copying only if the file exists and the last modified date/time and file size match. And so on, I'm sure you can imagine variants on this.
One thing that you might not want to do is build a list of all the files first and then start copying. It may very well be more efficient to copy files as you are iterating over the source directory. For example, you could use Directory.EnumerateFiles, which yields files lazily instead of building the whole list up front.
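A minimal sketch of that approach (the source and destination roots are placeholders):

using System.IO;

string sourceRoot = @"C:\Data";     // hypothetical source root
string destRoot   = @"L:\Backup";   // hypothetical backup root

foreach (string sourcePath in Directory.EnumerateFiles(
             sourceRoot, "*", SearchOption.AllDirectories))
{
    // build the matching destination path as we walk
    string relative = sourcePath.Substring(sourceRoot.Length + 1);
    string destPath = Path.Combine(destRoot, relative);

    if (!File.Exists(destPath))   // copy only files missing from the backup
    {
        Directory.CreateDirectory(Path.GetDirectoryName(destPath));
        File.Copy(sourcePath, destPath);
    }
}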
Of course, you don't need to write a program to do this. Thousands already exist, some of which are quite effective.
I would like to hold a file in a class.
I am writing an attachment user control.
After a file is uploaded via the upload control, I would like to hold it in a class before I upload it to SharePoint.
Users can upload more than one file.
Only when the user clicks the save button will I save everything: the files to SharePoint and the other data to the database.
Here is my class:
public class Document
{
    public string documentName, documentPath, spServerURL, spDocumentLibraryURL;
    public DateTime lodgementDate;
    public System.Web.HttpPostedFile postedFile;
}
How should I handle this? Is it OK to use HttpPostedFile? And during an update, can I convert an SPFile back to an HttpPostedFile?
The way that we handle this is when the file is uploaded, it is saved in a well-known directory with a temporary file name based on a GUID. The temporary file name and the original, uploaded file name, are then stored in a list of FileDetails within our class. Our class is then serialized to the page's ViewState, but it could also be stored in session state (I wouldn't recommend this in case your users open multiple pages or sign on to multiple computers with the same login).
When the save button is pressed, we loop through the list of FileDetails in our class, retrieve each one from the temporary directory, and send it to sharepoint (or wherever it needs to go).
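A rough sketch of the upload half of that (FileDetails, the holding directory, and the fileUpload control are illustrative names, not from any library):

using System;
using System.IO;

[Serializable]
public class FileDetails
{
    public string TempFileName;      // GUID-based name in the holding directory
    public string OriginalFileName;  // name of the file the user uploaded
}

// in the upload handler:
string holdingDir = @"C:\Uploads\Pending";   // hypothetical well-known directory
string tempName = Guid.NewGuid().ToString("N") + ".tmp";
fileUpload.PostedFile.SaveAs(Path.Combine(holdingDir, tempName));

// fileList is the List<FileDetails> serialized with the class
fileList.Add(new FileDetails
{
    TempFileName = tempName,
    OriginalFileName = Path.GetFileName(fileUpload.PostedFile.FileName)
});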
We also bind the uploaded files to a grid so that the user can see the list of uploaded files and, if they want to remove one, they can check a box in the grid (deleted) that we check before processing the files.
Note that this process can also support automatic unzipping of zipped files: if you detect that the uploaded file is zipped, you can unzip each of the files to the temporary directory and add an entry for each one to the list of files in the class. This could be a big time saver for your users.
Keeping documents in memory is not an efficient use of web server RAM.
It's much better to store them in either a temporary database or a file share.
However, I would not recommend writing this yourself if you can avoid it.
Try looking at the RadUpload control from Telerik, they even give you a 60-day trial.
I am currently learning C# during my studies and I am writing a small movie database application in it. I know it's not good to save pictures (etc.) inside the database, especially when adding movie covers that are rather big. But I don't want the files to just get saved in a folder, as this creates a mess as more and more movies are added to the list.
Does anyone know a way to store the files in some sort of container file (like covers.xxx)? That container would hold all covers in one big file, and the individual files could then be retrieved by an address or name.
Thanks :)
http://dotnetzip.codeplex.com/
Use the above library (DotNetZip) with the following code snippet:
using (ZipFile zip = new ZipFile())   // requires: using Ionic.Zip;
{
    // add this image into the "images" directory in the zip archive
    zip.AddFile("c:\\images\\personal\\7440-N49th.png", "images");
    // add the report into a different directory in the archive
    zip.AddFile("c:\\Reports\\2008-Regional-Sales-Report.pdf", "files");
    zip.AddFile("ReadMe.txt");
    zip.Save("MyZipFile.zip");
}
I can't see why storing the files as binary in the db is necessarily a bad idea.
However, if it's definitely out, then it sounds like you want a way to store an uncompressed compilation of files: basically a .zip that isn't compressed. You could achieve this yourself by creating a file that is simply the data of the files appended together, with some sort of unique header string between them that you can split on when you read the file back. Ultimately this simulates a basic file DB, so I'm not sure what you'd accomplish, but it's an idea.
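A toy sketch of that idea, using a length prefix per entry instead of a delimiter string (so binary image data can never collide with the separator); all names here are illustrative:

using System.Collections.Generic;
using System.IO;

// append one cover to the container: [name][length][bytes]
static void AddEntry(string containerPath, string name, byte[] data)
{
    using (var stream = new FileStream(containerPath, FileMode.Append))
    using (var writer = new BinaryWriter(stream))
    {
        writer.Write(name);          // length-prefixed string
        writer.Write(data.Length);   // byte count of the payload
        writer.Write(data);
    }
}

// read every entry back into a name -> bytes dictionary
static Dictionary<string, byte[]> ReadEntries(string containerPath)
{
    var entries = new Dictionary<string, byte[]>();
    using (var reader = new BinaryReader(File.OpenRead(containerPath)))
    {
        while (reader.BaseStream.Position < reader.BaseStream.Length)
        {
            string name = reader.ReadString();
            int length = reader.ReadInt32();
            entries[name] = reader.ReadBytes(length);
        }
    }
    return entries;
}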