HttpPostedFileBase SaveAs vs. InputStream - c#

Assuming I'm just saving files to a web server.
What is the difference between saving an uploaded file using the SaveAs method and processing the file via InputStream?
Is there a performance difference?
Can both accomplish large file size uploading?

What is the difference between saving an uploaded file using the SaveAs method and processing the file via InputStream?
Using SaveAs will just push the file to the file system. Processing using the input stream will allow you to perform any number of tasks - save to the file system, write to another stream, etc.
Is there a performance difference?
Depends on what you do. If you're comparing SaveAs to manually saving the file using the stream, then the difference is negligible.
Can both accomplish large file size uploading?
Yes.
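For illustration, a minimal sketch of both approaches in an ASP.NET MVC action (the controller name and upload folder are assumptions, not part of the question):

using System.IO;
using System.Web;
using System.Web.Mvc;

public class UploadController : Controller
{
    [HttpPost]
    public ActionResult Upload(HttpPostedFileBase file)
    {
        // Placeholder target path for this sketch.
        string path = Path.Combine(Server.MapPath("~/App_Data/uploads"),
                                   Path.GetFileName(file.FileName));

        // Option 1: SaveAs pushes the posted file straight to the file system.
        file.SaveAs(path);

        // Option 2: work with InputStream yourself - here it is just copied to a
        // FileStream, but it could go to any other stream (blob storage, a zip, ...).
        file.InputStream.Position = 0;                    // rewind in case it was already read
        using (var target = System.IO.File.Create(path))
        {
            file.InputStream.CopyTo(target);
        }

        return new HttpStatusCodeResult(200);
    }
}

In practice you would pick one of the two; they are shown together here only to contrast them.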

Temporary files without saving to HDD

Here is my case:
I'm using ABCPDF to generate a HTML document from a .DOCX file that I need to show on the web.
When you export to HTML from ABCPDF you generate a HTML and a folder with support files (.css, .js, .png)
Now these HTML files may contain quite sensitive data, so immediately after generating them I move them into a password-protected .zip file (from which I fetch them later).
The problem is that this leaves the files unencrypted on the HDD for a few seconds, and even longer if I'm (for some reason) unable to delete them at once.
I'd like suggestions for another way of doing this. I've looked into a RAM drive, but I'm not happy with installing such drivers on my servers (and the RAM drive would still be accessible from the OS).
The cause of the problem here might be that ABCPDF can only export HTML as files (since it's multiple files) and not as a stream.
Any ideas?
I'm using .NET 4.6.x and C#.
Since all your files except the .HTML are anonymous support files, you can use the suggested way of writing the HTML to a stream; only those other files (images, fonts, etc.) will be stored to the file system.
http://www.websupergoo.com/helppdfnet/source/5-abcpdf/doc/1-methods/save.htm
When saving to a Stream the format can be indicated using a Doc.SaveOptions.FileExtension property such as ".htm" or ".xps". For HTML you must provide a sensible value for the Doc.SaveOptions.Folder property.
http://www.websupergoo.com/helppdfnet/source/5-abcpdf/xsaveoptions/2-properties/folder.htm
This property specifies the folder where to store additional data such as images and fonts. It is only used when exporting documents to HTML; it is ignored otherwise.
For a start, try using a simple MemoryStream to hold the sensitive data. If you get large files or high traffic, open an encrypted stream to a file on your system.
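A rough sketch of that suggestion, assuming ABCpdf's Doc.Save(Stream) overload and the SaveOptions properties from the linked documentation; the paths, the namespace version and the AddToEncryptedZip helper are placeholders:

using System.IO;
using WebSupergoo.ABCpdf10;   // namespace/version is an assumption

static void ExportDocxToEncryptedZip()
{
    using (var doc = new Doc())
    using (var html = new MemoryStream())
    {
        doc.Read(@"C:\temp\input.docx");                 // placeholder input path

        doc.SaveOptions.FileExtension = ".htm";          // indicate HTML output for the stream
        doc.SaveOptions.Folder = @"C:\temp\support";     // support files (.css, .js, .png) go here

        doc.Save(html);                                  // the sensitive HTML never hits the disk

        // Write the in-memory HTML straight into the password-protected zip
        // (or any encrypted stream) - AddToEncryptedZip is a hypothetical helper.
        AddToEncryptedZip("document.htm", html.ToArray());
    }
}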

C# Get specific file Zip azure File Storage

I was wondering if it is possible to get a specific file in a ZIP from Azure File Storage without downloading and unzipping the whole ZIP.
The problem is that the zip file can be large (>1 gb), while the file in the zip which I need is just a few MB tops.
If it is possible, could you provide an example or link(s)?
Thank you
I was wondering if it is possible to get a specific file in a ZIP from Azure File Storage without downloading and unzipping the whole ZIP.
No. You would need to download the entire file from storage, unzip it, and then extract the desired file. It is possible to keep the entire downloaded file in memory (in the form of a stream) and have a zipping library work on that stream instead of saving the whole file to disk, but you still have to read the entire file.
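To show the in-memory variant, here is a sketch that pulls a single entry out of a zip that has already been downloaded into a stream; how you fill that stream (Azure SDK call, HttpClient, etc.) is left out and assumed:

using System.IO;
using System.IO.Compression;

static byte[] ExtractSingleEntry(Stream downloadedZip, string entryName)
{
    // downloadedZip must already contain the whole zip (e.g. a MemoryStream
    // filled by the Azure download call) - the full archive has to be read.
    using (var archive = new ZipArchive(downloadedZip, ZipArchiveMode.Read, leaveOpen: true))
    {
        ZipArchiveEntry entry = archive.GetEntry(entryName);
        if (entry == null)
            throw new FileNotFoundException(entryName + " was not found in the archive.");

        using (var entryStream = entry.Open())
        using (var result = new MemoryStream())
        {
            entryStream.CopyTo(result);
            return result.ToArray();        // just the few MB you actually need
        }
    }
}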

c# mimecontent to filestream

I am trying to download .eml files (message.MimeContent) from an Exchange server and upload them to a SharePoint document library. One option is saving the file to a local drive and then uploading it from a FileStream.
Is there a way that I can convert the MIME content to a file stream without saving it as a file first? This would make the code much nicer, avoiding all the intricacies of file operations, file name management, etc.
Assuming you are using EWS, the mime content is actually a byte[]. See the Content property.
You can keep this byte[] in memory instead of saving it to the file system, and use the SPFileCollection.Add method overload that accepts a byte[] to upload the file to the SharePoint site.
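A sketch of that combination, assuming the EWS Managed API (EmailMessage.MimeContent.Content) and the SharePoint server object model (SPFileCollection.Add with a byte[]); the site URL, library and file name are placeholders:

using Microsoft.Exchange.WebServices.Data;
using Microsoft.SharePoint;

static void UploadEmlToSharePoint(ExchangeService service, ItemId messageId)
{
    // Bind the message and ask EWS to include the MIME content.
    EmailMessage message = EmailMessage.Bind(service, messageId,
        new PropertySet(ItemSchema.MimeContent));

    byte[] emlBytes = message.MimeContent.Content;      // the raw .eml as a byte[]

    using (var site = new SPSite("http://sharepoint/sites/mail"))    // placeholder URL
    using (var web = site.OpenWeb())
    {
        SPFolder library = web.GetFolder("Documents");               // placeholder library
        // No intermediate file on disk: the byte[] goes straight to SharePoint.
        library.Files.Add(library.Url + "/message.eml", emlBytes, true);
    }
}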

Stream dynamic zip files with resume support

Suppose I have a list of MP3 files on my server, and I want a user to be able to download several of them (whichever he chooses). To do this I want to create the zip file dynamically while writing it to the output stream, using the DotNetZip/Ionic.Zip library.
However, that's not a perfect solution if the zip file gets large, because in that scenario the server can't support resumable downloads. To overcome this I need to handle the zip file structure internally and provide the resume support myself.
So, is there any open-source library I can use to stream resumable dynamic zip files directly to the output stream? Alternatively, I'd be happy if someone could explain the structure of a zip file, especially the header content and the data content.
Once a download has started, you should not alter the ZIP file anymore, because then a resume would just result in a broken ZIP file. So make sure your dynamically created ZIP file stays available!
The issue of providing resume functionality was solved in this article for .NET 1.1, and it is still valid and functional.
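As a rough outline of that advice (create the zip once, keep it around, and honor the Range header when the client resumes), here is a sketch; DotNetZip's ZipFile.AddFile/Save are real calls, while the handler wiring, the paths and the very simple Range parsing are assumptions:

using System;
using System.IO;
using System.Web;
using Ionic.Zip;                        // DotNetZip

public class ZipDownloadHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Build the zip once and keep it on disk so every resumed request sees
        // byte-for-byte the same file. All paths here are placeholders.
        string zipPath = context.Server.MapPath("~/App_Data/cache/selection.zip");
        if (!File.Exists(zipPath))
        {
            using (var zip = new ZipFile())
            {
                zip.AddFile(context.Server.MapPath("~/App_Data/music/track1.mp3"), "");
                zip.AddFile(context.Server.MapPath("~/App_Data/music/track2.mp3"), "");
                zip.Save(zipPath);
            }
        }

        long length = new FileInfo(zipPath).Length;
        long start = 0;

        // Only the common "bytes=<start>-" form is handled in this sketch.
        string range = context.Request.Headers["Range"];
        if (!string.IsNullOrEmpty(range) && range.StartsWith("bytes="))
        {
            start = long.Parse(range.Substring(6).Split('-')[0]);
            context.Response.StatusCode = 206;                        // Partial Content
            context.Response.AddHeader("Content-Range",
                string.Format("bytes {0}-{1}/{2}", start, length - 1, length));
        }

        context.Response.ContentType = "application/zip";
        context.Response.AddHeader("Accept-Ranges", "bytes");
        context.Response.AddHeader("Content-Length", (length - start).ToString());
        context.Response.TransmitFile(zipPath, start, length - start);
    }
}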

Uploading files: FileUpload.SaveAs or manually writing FileUpload.FileBytes

Using the FileUpload control, please explain the difference between the following two methods of uploading a file:
1. Using the FileUpload.SaveAs() method:
fileUploadControl.SaveAs(path);
2. Writing a byte array to disk from FileUpload.FileBytes using File.WriteAllBytes():
File.WriteAllBytes(path, fileUploadControl.FileBytes);
How would these compare when uploading large files?
These both have different purposes.
SaveAs writes the uploaded file directly to disk, while FileBytes gives you the file's contents as a byte array that you can then write out yourself (e.g. with File.WriteAllBytes).
Your file upload control will receive the bytes only after the file has been uploaded by the client, so there will be no difference in upload speeds.
A byte array is a reference type, so passing it around does not copy the contents; the real cost is that FileBytes loads the entire upload into memory as a single byte[], which can be expensive for large files.
I would use FileUpload.FileBytes when I want to access the bytes directly in memory and fileUploadControl.SaveAs whenever all I want to do is write the file to disk.
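To make the comparison concrete, a small Web Forms sketch (the button handler and upload folder are assumptions; fileUploadControl is the control from the question):

using System;
using System.IO;
using System.Web.UI;

public partial class UploadPage : Page
{
    protected void btnUpload_Click(object sender, EventArgs e)    // hypothetical button handler
    {
        if (!fileUploadControl.HasFile) return;

        string path = Path.Combine(Server.MapPath("~/App_Data/uploads"),
                                   Path.GetFileName(fileUploadControl.FileName));

        // 1. Hand the posted file to the control and let it write to disk.
        fileUploadControl.SaveAs(path);

        // 2. Materialize the whole upload as a byte[] in memory, then write it.
        //    Handy when you also need the bytes; wasteful for very large files.
        File.WriteAllBytes(path, fileUploadControl.FileBytes);
    }
}

You would normally use one or the other, not both, for the same upload.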
