using Ionic.Zip;
...
using (ZipFile zip = new ZipFile())
{
    // Use the alternate encoding for entry names only when necessary.
    zip.AlternateEncodingUsage = ZipOption.AsNecessary;
    zip.AddDirectoryByName("Files");

    foreach (GridViewRow row in GridView1.Rows)
    {
        if ((row.FindControl("chkSelect") as CheckBox).Checked)
        {
            string filePath = (row.FindControl("lblFilePath") as Label).Text;
            zip.AddFile(filePath, "Files");
        }
    }

    Response.Clear();
    Response.BufferOutput = false;
    string zipName = String.Format("Zip_{0}.zip", DateTime.Now.ToString("yyyy-MMM-dd-HHmmss"));
    Response.ContentType = "application/zip";
    Response.AddHeader("content-disposition", "attachment; filename=" + zipName);
    zip.Save(Response.OutputStream);
    Response.End();
}
Hello! This portion of code downloads a zipped directory. Let's say I have a GridView of the CONTENTS of text files I want to download. Is there a way to make the program download such an archive without knowing or writing the paths to the files?
The code should work this way:
1. get item from gridview
2. create a text file from the content
3. add it to the zip directory
(repeat for each item in the gridview)
n. download a zipped file
According to the documentation, you can add an entry from a Stream. So consider where you currently do this:
zip.AddFile(filePath, "Files");
Instead of adding a "file" given a path, you'd add a "file" given a stream of data.
So you can create a stream from a string:
new MemoryStream(Encoding.UTF8.GetBytes(someString)) // or whatever encoding you use
and add it to the Zip:
using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(someString)))
{
    zip.AddEntry(someFileName, stream);
    // other code
    zip.Save(Response.OutputStream);
}
One thing to note here is that your resource management and disposal (with the using blocks) might get a little tricky. This is because, according to the documentation:
The application should provide an open, readable stream; in this case it will be read during the call to Save() or one of its overloads.
What this means is that if you dispose of any of the streams before calling .Save(), it will fail when you call it. You might want to look through the documentation some more to see if there's a way to force the Zip to read the streams earlier in the process. Otherwise you're basically going to have to manage a bunch of open streams until it's time to "save" the Zip.
Edit: It looks like the documentation was right there...
In cases where a large number of streams will be added to the ZipFile, the application may wish to avoid maintaining all of the streams open simultaneously. To handle this situation, the application should use the AddEntry(String, OpenDelegate, CloseDelegate) overload.
This will be a little more complex and will require you to open/close/dispose your streams manually in your delegates. So it's up to you as you build your logic whether this is preferable to nesting your using blocks. It'll likely depend on how many streams you plan to use.
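To sketch what that might look like here (just an illustration; entryText stands for whatever string a given GridView row provides, and the entry name is a placeholder):
// The open delegate runs only when Save() needs the entry's data, and the
// close delegate runs as soon as the entry has been written, so only one
// stream has to be open at a time.
zip.AddEntry("Files/item1.txt",
    (name) => new MemoryStream(Encoding.UTF8.GetBytes(entryText)),
    (name, stream) => stream.Dispose());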
Related
The program I am working on currently uses a StreamWriter to create one or more text files in a target folder. From the StreamWriter class I am using WriteLine, and its IDisposable interface via a Using block (for the implicit .Close).
I need to add an option to create one or more text files inside a zip archive in a target folder. I was going to change the existing code to use streams, so it's possible to use a ZIP file as the output (planning to use DotNetZip).
I was thinking of creating some GetOutputStream function and feeding that into the currently existing method. This function would determine whether the archive option is set, and either create plain files or archive them. The problem is that MemoryStream, which looks like a good buffer class to use with DotNetZip, does not intersect with StreamWriter in the inheritance hierarchy.
It looks like my only option is to create some IWriteLine interface, which would expose WriteLine and IDisposable, then branch two new child classes from StreamWriter and MemoryStream and implement IWriteLine in them.
Is there a better solution?
The current code conceptually looks like this:
Using sw As StreamWriter = File.CreateText(fullPath)
    sw.WriteLine(header)
    sw.WriteLine(signature)
    While dr.Read 'dr = DataReader
        Dim record As String = GetDataRecord(dr)
        sw.WriteLine(record)
    End While
End Using
For code samples, either VB.NET or C# is fine, although this is more of a conceptual question.
EDIT: Cannot use .NET 4.5's System.IO.Compression.ZipArchive, have to stick with .NET 4.0. We still need to support clients running on Windows 2003.
Use the StreamWriter(Stream) constructor to have it write to a MemoryStream. Set the stream's Position back to 0 so the written text can be added to the archive, then write the archive out with ZipFile.Save(Stream). Check the ZipIntoMemory helper method in the project's sample code for guidance.
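A rough C# sketch of that idea, assuming DotNetZip (Ionic.Zip); the entry name and the output path are placeholders, and header/signature come from the question's code. Note that on .NET 4.0 disposing the StreamWriter would also close the underlying MemoryStream, so the writer is only flushed here:
var ms = new MemoryStream();
var sw = new StreamWriter(ms);          // defaults to UTF-8
sw.WriteLine(header);
sw.WriteLine(signature);
// ... WriteLine each data record here, as in the existing code ...
sw.Flush();                             // do not dispose yet; that would close ms
ms.Position = 0;                        // rewind so DotNetZip can read from the start

using (var zip = new Ionic.Zip.ZipFile())
{
    zip.AddEntry("records.txt", ms);    // the stream is read when Save() runs
    zip.Save(fullPath + ".zip");        // or zip.Save(someOutputStream)
}
ms.Dispose();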
First of all, with the .NET 4.5 System.IO.Compression.ZipArchive class (see http://msdn.microsoft.com/en-us/library/system.io.compression.ziparchive.aspx) you no longer need DotNetZip, at least for common zipping tasks.
It could look like this:
string filePath = "...";
//Create the file.
using (FileStream fileStream = File.Create(filePath))
{
    //Create the archive infrastructure.
    using (ZipArchive archive = new ZipArchive(fileStream, ZipArchiveMode.Create, true, Encoding.UTF8))
    {
        SqlDataReader sqlReader = null; //Placeholder: use your open data reader here.
        //Read each row into a separate text file in the archive.
        while (sqlReader.Read())
        {
            string record = sqlReader.GetString(0);
            //An archive entry is a file inside the archive.
            ZipArchiveEntry entry = archive.CreateEntry("...", CompressionLevel.Optimal);
            //Get a stream to write the archive item body.
            using (Stream entryStream = entry.Open())
            {
                //All you need here is to write data into the archive item stream.
                byte[] recordData = Encoding.Unicode.GetBytes(record);
                using (MemoryStream recordStream = new MemoryStream(recordData))
                {
                    recordStream.CopyTo(entryStream);
                }
                //Flush the archive item to avoid data loss on dispose.
                entryStream.Flush();
            }
        }
    }
}
I am trying to implement a file download feature in an ASP.NET application. The application would be used by, say, around 200 users concurrently to download various files.
It would be hosted on IIS 7. I do not want the application server to crash because of multiple requests coming in concurrently.
I am assuming that by calling Context.Response.Flush() in a loop, I am flushing out all the file data that I have read up to that point, so application memory usage is kept uniform. What other optimizations can I make to the current code, or what other approach should be used in a scenario like this?
The requests would be for various files and the file sizes can be anywhere between 100 KB to 10 MB.
My current code is like this:
FileStream inStr = null;
byte[] buffer = new byte[1024];
String fileName = @"C:\DwnldTest\test.doc";
long byteCount;
inStr = File.OpenRead(fileName);
Response.AddHeader("content-disposition", "attachment;filename=test.doc");
while ((byteCount = inStr.Read(buffer, 0, buffer.Length)) > 0)
{
    if (Context.Response.IsClientConnected)
    {
        Context.Response.ContentType = "application/msword";
        //Context.Response.BufferOutput = true;
        Context.Response.OutputStream.Write(buffer, 0, (int)byteCount);
        Context.Response.Flush();
    }
}
You can use Response.TransmitFile to save server memory when sending files.
Response.ContentType = "application/pdf";
Response.AddHeader("content-disposition", "attachment; filename=testdoc.pdf");
Response.TransmitFile(@"e:\inet\www\docs\testdoc.pdf");
Response.End();
In your code example, you're not closing / disposing inStr. That could affect performance.
Another, simpler way to do this would be to use the built-in method:
WriteFile
It should already be optimized and will take care of opening / closing the file for you.
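For instance, something along these lines (a sketch reusing the path from the question):
Response.ContentType = "application/msword";
Response.AddHeader("content-disposition", "attachment; filename=test.doc");
Response.WriteFile(@"C:\DwnldTest\test.doc");  // the framework reads and streams the file
Response.End();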
Maybe you want to use the FileSystemWatcher class to check whether the file was modified, and read it into memory only when such a change is detected. The rest of the time, just return the byte array that is already stored in memory. I don't know whether HttpResponse.WriteFile is sensitive to such file modifications or always reads the file from the given path, but it also seems to be a good option, as it is served by the framework out of the box.
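A rough sketch of that caching idea, assuming a single known file; the path and the class name here are illustrative only, not from the original post:
using System.IO;

// Illustrative only: caches one file's bytes and re-reads them when the
// FileSystemWatcher reports a change. A real implementation would guard
// against reading while the file is still being written.
static class CachedDownload
{
    static readonly string path = @"C:\DwnldTest\test.doc";
    static volatile byte[] cached = File.ReadAllBytes(path);
    static readonly FileSystemWatcher watcher = CreateWatcher();

    static FileSystemWatcher CreateWatcher()
    {
        var w = new FileSystemWatcher(Path.GetDirectoryName(path), Path.GetFileName(path));
        w.Changed += (s, e) => cached = File.ReadAllBytes(path);
        w.EnableRaisingEvents = true;
        return w;
    }

    public static byte[] Bytes { get { return cached; } }
}
A request handler would then just call Response.BinaryWrite(CachedDownload.Bytes) instead of opening the file each time.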
Since you are sending an existing file to the client, consider using HttpResponse.TransmitFile (http://msdn.microsoft.com/en-us/library/12s31dhy.aspx).
Looking at the .NET code, it seems that this forwards the file writing to IIS instead of reading/writing it in the ASP.NET process. HttpResponse.WriteFile(string, false) and HttpResponse.Write(string) seem to do the same thing.
To verify that the file sending is relayed to IIS, look at the HttpResponse.Output property - it should be of type HttpWriter. The HttpWriter._buffers array should now contain a new HttpFileResponseElement.
Of course, you should always investigate whether caching is appropriate in your scenario and test that it is being used.
How can I read the content of a text file inside a zip archive?
For example, I have an archive qwe.zip, and inside it there's a file asd.txt; how can I read the contents of that file?
Is it possible to do this without extracting the whole archive? It needs to be done quickly, when the user clicks an item in a list, to show a description of the archive (it is needed for a plugin system for another program). So extracting the whole archive isn't the best solution... it might be a few MB, which would take at least a few seconds or more to extract, while only that single file needs to be read.
You could use a library such as SharpZipLib or DotNetZip to unzip the file and fetch the contents of individual files contained inside. This operation could be performed in-memory and you don't need to store the files into a temporary folder.
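For example, with DotNetZip this could look roughly like the following (a sketch; qwe.zip and asd.txt are taken from the question):
using System.IO;
using Ionic.Zip;

using (ZipFile zip = ZipFile.Read("qwe.zip"))
{
    ZipEntry entry = zip["asd.txt"];        // look up the single entry by name
    using (var ms = new MemoryStream())
    {
        entry.Extract(ms);                  // extract just this entry, in memory
        ms.Position = 0;
        string text = new StreamReader(ms).ReadToEnd();
    }
}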
Unzip to a temp folder, take the file you need, and delete the temp data:
public static void Decompress(string outputDirectory, string zipFile)
{
    try
    {
        if (!File.Exists(zipFile))
            throw new FileNotFoundException("Zip file not found.", zipFile);

        using (Package zipPackage = ZipPackage.Open(zipFile, FileMode.Open, FileAccess.Read))
        {
            foreach (PackagePart part in zipPackage.GetParts())
            {
                string targetFile = outputDirectory + "\\" + part.Uri.ToString().TrimStart('/');
                using (Stream streamSource = part.GetStream(FileMode.Open, FileAccess.Read))
                {
                    using (Stream streamDestination = File.OpenWrite(targetFile))
                    {
                        byte[] arrBuffer = new byte[10000];
                        int iRead = streamSource.Read(arrBuffer, 0, arrBuffer.Length);
                        while (iRead > 0)
                        {
                            streamDestination.Write(arrBuffer, 0, iRead);
                            iRead = streamSource.Read(arrBuffer, 0, arrBuffer.Length);
                        }
                    }
                }
            }
        }
    }
    catch (Exception)
    {
        throw;
    }
}
Although this is late in the game and the question has already been answered, in the hope that it might still be useful for others who find this thread, I would like to add another solution.
Just today I encountered a similar problem when I wanted to check the contents of a ZIP file with C#. Unlike NewProger, I cannot use a third-party library and need to stay within the out-of-the-box .NET classes.
You can use the ZipPackage class from the System.IO.Packaging namespace. If the assembly is not already referenced, you need to add a reference to WindowsBase.dll.
It seems, however, that this class does not always work with every Zip file. Calling GetParts() may return an empty list although in the QuickWatch window you can find a property called _zipArchive that contains the correct contents.
If this is the case for you, you can use Reflection to get the contents of it.
On geissingert.com you can find a blog article ("Getting a list of files from a ZipPackage") that gives a coding example for this.
SharpZipLib or DotNetZip may still need to read the whole .zip file in order to unzip a single file. There is, however, a way to extract a specific file from the .zip without reading the entire archive, by reading only the small segment that contains it.
I needed to look inside Excel files, and I did it like so:
using (var zip = ZipFile.Open("ExcelWorkbookWithMacros.xlsm", ZipArchiveMode.Update))
{
    var entry = zip.GetEntry("xl/_rels/workbook.xml.rels");
    if (entry != null)
    {
        var tempFile = Path.GetTempFileName();
        entry.ExtractToFile(tempFile, true);
        var content = File.ReadAllText(tempFile);
        [...]
    }
}
OK, I am downloading a file from a server, and I plan to delete the file on the server after it has been downloaded on the client side.
My download code is working fine, but I don't know where to put the command that deletes the file.
string filepath = restoredFilename.ToString();
// Create New instance of FileInfo class to get the properties of the file being downloaded
FileInfo myfile = new FileInfo(filepath);
// Checking if file exists
if (myfile.Exists)
{
    // Clear the content of the response
    Response.ClearContent();
    // Add the file name and attachment, which will force the open/cancel/save dialog box to show, to the header
    Response.AddHeader("Content-Disposition", "attachment; filename=" + myfile.Name);
    //Response.AddHeader("Content-Disposition", "inline; filename=" + myfile.Name);
    // Add the file size into the response header
    Response.AddHeader("Content-Length", myfile.Length.ToString());
    // Set the ContentType
    Response.ContentType = ReturnExtension(myfile.Extension.ToLower());
    // Write the file into the response (TransmitFile is for ASP.NET 2.0; in ASP.NET 1.1 you have to use WriteFile instead)
    Response.TransmitFile(myfile.FullName);
    // End the response
    Response.End();
}
Now, I know that Response.End() will stop everything and return, so is there another way to do this?
I need to call a function
DeleteRestoredFileForGUI(restoredFilename);
to delete the file, but I don't know where to put it. I tried putting it before and after Response.End(), but it does not work.
Any help is appreciated. Thanks.
Add
Response.Flush();
DeleteRestoredFileForGUI(restoredFilename);
after the call to TransmitFile() and ditch the call to Response.End() (you don't need it).
If that does not work, then ditch TransmitFile() and go with:
using (Stream s = myFile.OpenRead())
{
    int bytesRead = 0;
    byte[] buffer = new byte[32 * 1024]; // 32k buffer
    while ((bytesRead = s.Read(buffer, 0, buffer.Length)) > 0 &&
           Response.IsClientConnected)
    {
        Response.OutputStream.Write(buffer, 0, bytesRead);
        Response.Flush();
    }
}
You can't delete the file straight away, as it may not have been downloaded yet. From the server side there is no easy way of telling that the file was successfully downloaded. What if an open/save dialog is opened by the browser? The download won't begin until the dialog is acknowledged (which may not be immediate, and the dialog may be cancelled).
Or, what if it is a large file and the connection is dropped before it is fully downloaded? Should it be possible to attempt the download again?
The normally recommended way of dealing with your situation is to do the deletion as a separate process, after a time period which allows you to be (fairly) sure the file is no longer required and/or that it can be recreated/restored if need be.
Depending on your situation, you could have a separate process which periodically removes/processes old files. Or, if you have a low volume of traffic, you could check for and delete old files each time a new one is requested (a sketch of the periodic approach is shown below).
The identification of old files will likely be based on a file time or an associated value in a database. Either way, if there are potentially lots of files to process, you are unlikely to want the overhead of checking very frequently when a check is unlikely to identify many files to remove.
Also, be sure to weigh up the consequences of lots of files not being removed ASAP (is disk space really an issue?) against the side effects of possibly deleting them while they are still needed, or of creating a performance side effect by checking too zealously.
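For illustration, a minimal sketch of such a periodic cleanup; the folder name and the one-hour threshold are assumptions, not from the original post:
// Delete restored files that have not been touched for an hour.
// tempFolder is a hypothetical path where the restored downloads are written.
string tempFolder = @"C:\RestoredFiles";
foreach (string file in Directory.GetFiles(tempFolder))
{
    if (File.GetLastWriteTimeUtc(file) < DateTime.UtcNow.AddHours(-1))
    {
        try { File.Delete(file); }
        catch (IOException) { /* still in use; try again on the next run */ }
    }
}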
The general pattern you are following makes me wonder, are you doing this?
1. Create Data for Client and Save to Disk
2. Transmit File to Client
3. Delete File
If you are, you might change your system to work in memory. Since memory is managed in .NET you wouldn't have to do this manual cleanup, and depending on the size of the file this could be a good bit faster too:
1. Create Data for Client and Save to MemoryStream
2. Transmit Stream to Client
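A minimal sketch of that in-memory variant, under the assumption that the data can be produced into a stream (BuildReport and the file name are hypothetical):
using (var ms = new MemoryStream())
{
    BuildReport(ms);                                   // hypothetical: write the data into the stream
    Response.ContentType = "application/octet-stream";
    Response.AddHeader("Content-Disposition", "attachment; filename=report.dat");
    Response.AddHeader("Content-Length", ms.Length.ToString());
    Response.BinaryWrite(ms.ToArray());
    Response.End();
}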
Since you set the file name in the header, you have two options:
1. Read the file contents into a string, delete the file, then echo/print the string as the body of the message.
2. Rename the file to something like delete-filename.xxx and then have some external process (maybe a cron job?) that goes behind and deletes any files beginning with that prefix.
I have a website that has a bunch of PDFs that are pre-created and sitting on the webserver.
I don't want to allow a user to just type in a URL and get the PDF file (ie http://MySite/MyPDFFolder/MyPDF.pdf)
I want to only allow them to be viewed when I load them and display them.
I have done something similar before. I used PDFSharp to create a PDF in memory and then load it to a page like this:
protected void Page_Load(object sender, EventArgs e)
{
    try
    {
        MemoryStream streamDoc = BarcodeReport.GetPDFReport(ID, false);

        // Set the ContentType to pdf, add a header for the length
        // and write the contents of the memorystream to the response
        Response.ContentType = "application/pdf";
        Response.AddHeader("content-length", Convert.ToString(streamDoc.Length));
        Response.BinaryWrite(streamDoc.ToArray());

        //End the response
        Response.End();
        streamDoc.Close();
    }
    catch (NullReferenceException)
    {
        Communication.Logout();
    }
}
I tried to use this code to read from a file, but could not figure out how to get a MemoryStream to read in a file.
I also need a way to say that the "/MyPDFFolder" path is non-browsable.
Thanks for any suggestions
To load a PDF file from the disk into a buffer:
byte[] buffer;
using (FileStream fileStream = new FileStream(Filename, FileMode.Open))
{
    using (BinaryReader reader = new BinaryReader(fileStream))
    {
        buffer = reader.ReadBytes((int)reader.BaseStream.Length);
    }
}
Then you can create your MemoryStream like this:
using (MemoryStream msReader = new MemoryStream(buffer, false))
{
    // your code here.
}
But if you already have your data in memory, you don't need the MemoryStream. Instead do this:
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Length", buffer.Length.ToString());
Response.BinaryWrite(buffer);
//End the response
Response.End();
Anything that is displayed on the user's screen can be captured. You might protect your source files by using a browser-based PDF viewer, but you can't prevent the user from taking snapshots of the data.
As far as keeping the source files safe... if you simply store them in a directory that is not under your web root, that should do the trick. Or you can use the server's configuration (web.config on IIS, or an .htaccess file on Apache) to restrict access to the directory.
Keltex's code works for limiting who can get to the file. If the user isn't authorized for a particular file, give them a page with an error message, otherwise use that code to relay them the PDF. The URL then won't be directly to a PDF, but rather a script, so that will give you 100% control over who is permitted to access it.
Rather than putting the PDFs in question in an accessible location and messing with the configuration to hide them, you could put them someplace on the server that isn't directly web accessible. Since you'll have code reading the file into a buffer and relaying it to the user anyway, it doesn't matter where on the server the file is located, so long as it is accessible to your code.