I created a report using ReportViewer and I can generate it on my local machine without any problem. However, after publishing the application to IIS, I get the error "Access to the path 'C:\xxxxxxx.xlsx' is denied." This is of course a permission problem, but in our company there is usually no write permission anywhere on the C drive, so I think the best approach is to open the generated Excel file in memory, without writing it to disk. How can I update the method I use to achieve this? Any ideas?
I send the report (created with ReportViewer) to this method as a Stream and open the generated report without writing it to disk:
public static void StreamToProcess(Stream readStream, string fileName, string fileExtension)
{
    var myFile = fileName + "_" + Path.GetRandomFileName();
    var writeStream = new FileStream(
        String.Format("{0}\\{1}.{2}",
            Environment.GetFolderPath(Environment.SpecialFolder.InternetCache),
            myFile, fileExtension),
        FileMode.Create, FileAccess.Write);
    const int length = 16384;
    var buffer = new Byte[length];
    var bytesRead = readStream.Read(buffer, 0, length);
    while (bytesRead > 0)
    {
        writeStream.Write(buffer, 0, bytesRead);
        bytesRead = readStream.Read(buffer, 0, length);
    }
    readStream.Close();
    writeStream.Close();
    Process.Start(Environment.GetFolderPath(Environment.SpecialFolder.InternetCache)
        + "\\" + myFile + "." + fileExtension);
}
Any help would be appreciated.
Update:
I pass the report stream to this method as shown below:
StreamToProcess(reportStream, "Weekly_Report", "xlsx");
Note: reportStream is the report generated by ReportViewer.
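For context, a stream like reportStream is typically produced from ReportViewer along these lines (a sketch, not the asker's actual code; "EXCELOPENXML" is the render format name that produces .xlsx output):

// Render the report to a byte array, then wrap it in a stream
byte[] reportBytes = reportViewer.LocalReport.Render("EXCELOPENXML");
var reportStream = new MemoryStream(reportBytes);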
You can write to a MemoryStream instead and return this stream to the caller. Make sure you do not close the instance of your memory stream, and rewind it (set Position back to 0) before it is read. You can then pass this memory stream to a StreamWriter instance if you want to process the in-memory file.
public static Stream StreamToProcess(Stream readStream)
{
    var writeStream = new MemoryStream();
    const int length = 16384;
    var buffer = new Byte[length];
    var bytesRead = readStream.Read(buffer, 0, length);
    while (bytesRead > 0)
    {
        writeStream.Write(buffer, 0, bytesRead);
        bytesRead = readStream.Read(buffer, 0, length);
    }
    readStream.Close();
    writeStream.Position = 0; // rewind so the caller can read from the start
    return writeStream;
}
Your current approach of calling Process.Start will not work on the IIS server, as it will open the Excel document on the server rather than on the user's computer. You will have to provide a link for the user to download the file instead; you could use an AJAX request for this that streams from the MemoryStream. Have a look at this post for further details on how to implement this.
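For illustration, here is a minimal sketch of what such a download endpoint could look like, assuming ASP.NET MVC; DownloadWeeklyReport and GetReportStream are hypothetical names standing in for however the report stream is obtained:

public ActionResult DownloadWeeklyReport()
{
    // Copy the report into a MemoryStream using the method above (rewound before returning)
    Stream report = StreamToProcess(GetReportStream()); // GetReportStream is a hypothetical helper
    // FileStreamResult streams the in-memory file to the browser as a download
    return File(report,
        "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
        "Weekly_Report.xlsx");
}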
The problem is still there after trying the three methods below:
1. The Windows API URLDownloadToFile
2. The WebClient method: webclient.DownloadFile(url, dest) (with/without credentials)
3. The HttpWebRequest method:
public static void Download(String strURLFileandPath, String strFileSaveFileandPath)
{
    HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(strURLFileandPath);
    HttpWebResponse ws = (HttpWebResponse)wr.GetResponse();
    Stream str = ws.GetResponseStream();
    byte[] inBuf = new byte[100000];
    int bytesToRead = (int)inBuf.Length;
    int bytesRead = 0;
    while (bytesToRead > 0)
    {
        int n = str.Read(inBuf, bytesRead, bytesToRead);
        if (n == 0)
            break;
        bytesRead += n;
        bytesToRead -= n;
    }
    FileStream fstr = new FileStream(strFileSaveFileandPath, FileMode.OpenOrCreate, FileAccess.Write);
    fstr.Write(inBuf, 0, bytesRead);
    str.Close();
    fstr.Close();
}
I am still facing the problem: the file downloads to my local system, but when I open it, it shows up as a corrupt PDF.
I just want to download the PDF from a URL in VB.NET/C#, without using the ASP.NET Response method.
Please help if you have faced this problem.
Thanks in advance!
Your code reads at most 100000 bytes of the downloaded PDF into a single buffer and writes only those, so every PDF bigger than 100000 bytes gets truncated and therefore corrupted.
To handle larger files, you have to write the contents of every buffer to the FileStream as you read.
The following should do it:
HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(strURLFileandPath);
using (HttpWebResponse ws = (HttpWebResponse)wr.GetResponse())
using (Stream str = ws.GetResponseStream())
using (FileStream fstr = new FileStream(strFileSaveFileandPath, FileMode.OpenOrCreate, FileAccess.Write))
{
    byte[] inBuf = new byte[100000];
    int bytesRead = 0;
    while ((bytesRead = str.Read(inBuf, 0, inBuf.Length)) > 0)
        fstr.Write(inBuf, 0, bytesRead);
}
(It's good coding practice to wrap every IDisposable in a using statement instead of manually closing the streams.)
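As a side note, on .NET 4 or later the manual read/write loop can be replaced by Stream.CopyTo. A sketch of the same download (here with FileMode.Create, so an existing longer file is truncated instead of partially overwritten):

HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(strURLFileandPath);
using (HttpWebResponse ws = (HttpWebResponse)wr.GetResponse())
using (Stream str = ws.GetResponseStream())
using (FileStream fstr = new FileStream(strFileSaveFileandPath, FileMode.Create, FileAccess.Write))
{
    // CopyTo reads and writes in buffered chunks until the end of the response stream
    str.CopyTo(fstr);
}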
I need help converting a VERY LARGE binary file (a ZIP file) to a Base64String and back again. The files are too large to be loaded into memory all at once (they throw OutOfMemoryExceptions); otherwise this would be a simple task. I do not want to process the contents of the ZIP file individually, I want to process the entire ZIP file.
The problem:
I can convert the entire ZIP file (test sizes vary from 1 MB to 800 MB at present) to Base64String, but when I convert it back, it is corrupted. The new ZIP file is the correct size, it is recognized as a ZIP file by Windows and WinRAR/7-Zip, etc., and I can even look inside the ZIP file and see the contents with the correct sizes/properties, but when I attempt to extract from the ZIP file, I get: "Error: 0x80004005" which is a general error code.
I am not sure where or why the corruption is happening. I have done some investigating, and I have noticed the following:
If you have a large text file, you can convert it to Base64String incrementally without issue. If calling Convert.ToBase64String on the entire file yielded: "abcdefghijklmnopqrstuvwx", then calling it on the file in two pieces would yield: "abcdefghijkl" and "mnopqrstuvwx".
Unfortunately, if the file is a binary then the result is different. While the entire file might yield: "abcdefghijklmnopqrstuvwx", trying to process this in two pieces would yield something like: "oiweh87yakgb" and "kyckshfguywp".
Is there a way to incrementally base64-encode a binary file while avoiding this corruption?
My code:
private void ConvertLargeFile()
{
    FileStream inputStream = new FileStream("C:\\Users\\test\\Desktop\\my.zip", FileMode.Open, FileAccess.Read);
    byte[] buffer = new byte[MultipleOfThree];
    int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
    while (bytesRead > 0)
    {
        byte[] secondaryBuffer = new byte[buffer.Length];
        int secondaryBufferBytesRead = bytesRead;
        Array.Copy(buffer, secondaryBuffer, buffer.Length);
        bool isFinalChunk = false;
        Array.Clear(buffer, 0, buffer.Length);
        bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        if (bytesRead == 0)
        {
            isFinalChunk = true;
            buffer = new byte[secondaryBufferBytesRead];
            Array.Copy(secondaryBuffer, buffer, buffer.Length);
        }
        String base64String = Convert.ToBase64String(isFinalChunk ? buffer : secondaryBuffer);
        File.AppendAllText("C:\\Users\\test\\Desktop\\Base64Zip", base64String);
    }
    inputStream.Dispose();
}
The decoding is more of the same. I use the size of the base64String variable above (which varies depending on the original buffer size I test with) as the buffer size for decoding. Then, instead of Convert.ToBase64String(), I call Convert.FromBase64String() and write to a different file name/path.
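For reference, a minimal sketch of such a decoding loop (my own reconstruction, not the asker's exact code): it assumes the encoded file was written in multiple-of-3-byte chunks, so the text is one continuous valid base64 string and any multiple-of-4 character buffer decodes cleanly. The output file name is hypothetical.

private void ConvertLargeFileBack()
{
    using (var reader = new StreamReader("C:\\Users\\test\\Desktop\\Base64Zip"))
    using (var output = new FileStream("C:\\Users\\test\\Desktop\\my_restored.zip", FileMode.Create))
    {
        char[] charBuffer = new char[4096]; // must be a multiple of 4
        int charsRead;
        // ReadBlock fills the buffer completely except on the final chunk
        while ((charsRead = reader.ReadBlock(charBuffer, 0, charBuffer.Length)) > 0)
        {
            byte[] bytes = Convert.FromBase64String(new string(charBuffer, 0, charsRead));
            output.Write(bytes, 0, bytes.Length);
        }
    }
}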
EDIT:
In my haste to reduce the code (I refactored it into a new project, separate from other processing, to eliminate code that isn't central to the issue), I introduced a bug. The base64 conversion should be performed on secondaryBuffer for all iterations except the last (identified by isFinalChunk), where buffer should be used. I have corrected the code above.
EDIT #2:
Thank you all for your comments/feedback. After correcting the bug (see the edit above), I re-tested my code, and it is actually working now. I intend to test and implement @rene's solution, as it appears to be the best, but I thought I should let everyone know of my discovery as well.
Based on the code shown in the blog from Wiktor Zychla, the following code works. The same solution is indicated in the Remarks section of Convert.ToBase64String, as pointed out by Ivan Stoev.
// using System.Security.Cryptography

private void ConvertLargeFile()
{
    // encode
    var filein = @"C:\Users\test\Desktop\my.zip";
    var fileout = @"C:\Users\test\Desktop\Base64Zip";
    using (FileStream fs = File.Open(fileout, FileMode.Create))
    using (var cs = new CryptoStream(fs, new ToBase64Transform(), CryptoStreamMode.Write))
    using (var fi = File.Open(filein, FileMode.Open))
    {
        fi.CopyTo(cs);
    }
    // the zip file is now stored in base64zip

    // and decode
    using (FileStream f64 = File.Open(fileout, FileMode.Open))
    using (var cs = new CryptoStream(f64, new FromBase64Transform(), CryptoStreamMode.Read))
    using (var fo = File.Open(filein + ".orig", FileMode.Create))
    {
        cs.CopyTo(fo);
    }
    // the original file is in my.zip.orig
    // use the command-line tool
    //   fc my.zip my.zip.orig
    // to verify that the start file and the encoded-and-decoded file are the same
}
The code uses standard classes from the System.Security.Cryptography namespace: a CryptoStream combined with ToBase64Transform and its counterpart FromBase64Transform. Because the transforms consume input in fixed-size blocks internally (3 bytes in, 4 characters out when encoding), chunk alignment across buffer boundaries is handled for you, which is exactly what made the manual chunking approach fragile.
You can avoid using a secondary buffer by passing offset and length to Convert.ToBase64String, like this:
private void ConvertLargeFile()
{
    using (var inputStream = new FileStream("C:\\Users\\test\\Desktop\\my.zip", FileMode.Open, FileAccess.Read))
    {
        byte[] buffer = new byte[MultipleOfThree];
        int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        while (bytesRead > 0)
        {
            String base64String = Convert.ToBase64String(buffer, 0, bytesRead);
            File.AppendAllText("C:\\Users\\test\\Desktop\\Base64Zip", base64String);
            bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        }
    }
}
The above should work, but I think Rene's answer is actually the better solution.
Use this code:
public void ConvertLargeFile(string source, string destination)
{
    using (FileStream inputStream = new FileStream(source, FileMode.Open, FileAccess.Read))
    {
        int buffer_size = 30000; // or any multiple of 3
        byte[] buffer = new byte[buffer_size];
        int bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        while (bytesRead > 0)
        {
            byte[] buffer2 = buffer;
            if (bytesRead < buffer_size)
            {
                buffer2 = new byte[bytesRead];
                Buffer.BlockCopy(buffer, 0, buffer2, 0, bytesRead);
            }
            string base64String = System.Convert.ToBase64String(buffer2);
            File.AppendAllText(destination, base64String);
            bytesRead = inputStream.Read(buffer, 0, buffer.Length);
        }
    }
}
I've got a rare case where a file cannot be read from a UNC path immediately after it was written. Here's the workflow:
1. plupload sends a large file in chunks to a WebAPI method.
2. The method writes the chunks to a UNC path (a storage server). This loops until the file is completely uploaded.
3. After a few other operations, the same method tries to read the file again, and sometimes it cannot find it.
It only seems to happen after our servers have been idle for a while. If I repeat the upload a few times, it starts to work.
I thought it might be a network configuration issue, or something to do with the file not being completely closed before it is read again.
Here's part of the code that writes the file (is the FileStream OK in this case?):
SaveStream(stream, new FileStream(fileName, FileMode.Append, FileAccess.Write));
Here's the SaveStream definition:
private static void SaveStream(Stream stream, FileStream fileStream)
{
    using (var fs = fileStream)
    {
        var buffer = new byte[1024];
        var l = stream.Read(buffer, 0, 1024);
        while (l > 0)
        {
            fs.Write(buffer, 0, l);
            l = stream.Read(buffer, 0, 1024);
        }
        fs.Flush();
        fs.Close();
    }
}
Here's the code that reads the file:
var fileInfo = new FileInfo(fileName);
var exists = fileInfo.Exists;
It's the fileInfo.Exists that is returning false.
Thank you
These kinds of errors are mostly due to files not having been closed yet.
Try passing the fileName to SaveStream and then use it as follows:
private static void SaveStream(Stream stream, string fileName)
{
    using (var fs = new FileStream(fileName, FileMode.Append, FileAccess.Write))
    {
        var buffer = new byte[1024];
        var l = stream.Read(buffer, 0, 1024);
        while (l > 0)
        {
            fs.Write(buffer, 0, l);
            l = stream.Read(buffer, 0, 1024);
        }
        fs.Flush();
    } // the end of the using block will close and dispose fs properly
}
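The call site from the question then becomes simply:

SaveStream(stream, fileName);

This way the method creates, owns, and reliably closes the FileStream itself.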
(Warning: first time on Stack Overflow.) I want to be able to read in a PDF via binary, but I encounter an issue when writing it back to isolated storage: when I then try to open the file, I get an error message from Adobe Reader saying it is not a valid PDF. The file is 102 KB, but when I write it to isolated storage it is 108 KB.
My reason for doing this is that I want to be able to split PDFs. I have tried PDFsharp (it doesn't open all PDF types). Here is my code:
public void pdf_split()
{
    string prefix = @"/PDFread;component/";
    string fn = originalFile;
    StreamResourceInfo sr = Application.GetResourceStream(new Uri(prefix + fn, UriKind.Relative));
    IsolatedStorageFile iStorage = IsolatedStorageFile.GetUserStoreForApplication();
    using (var outputStream = iStorage.OpenFile(sFile, FileMode.CreateNew))
    {
        Stream resourceStream = sr.Stream;
        long length = resourceStream.Length;
        byte[] buffer = new byte[32];
        int readCount = 0;
        while (readCount < length)
        {
            int read = sr.Stream.Read(buffer, 0, buffer.Length);
            readCount += read;
            outputStream.Write(buffer, 0, read);
        }
    }
}
I am using the code below to upload a file. The files upload without error, but when I open a file with a .docx extension, Microsoft Word says the file is corrupted. It is able to repair the file and then open it, however. I'd like to fix this so the document opens correctly.
string strExtension = Path.GetExtension(context.Request.Files[0].FileName).ToLower();
string fileName = @"C:\" + Guid.NewGuid().ToString() + strExtension;
using (FileStream fs = new FileStream(fileName, FileMode.CreateNew))
{
    byte[] bytes = new byte[16 * 1024];
    int bytesRead;
    while ((bytesRead = context.Request.InputStream.Read(bytes, 0, bytes.Length)) > 0)
    {
        fs.Write(bytes, 0, bytesRead);
    }
}
Thanks.
EDITED:
The file saves correctly with this code:
while ((bytesRead = context.Request.Files[0].InputStream.Read(bytes, 0, bytes.Length)) > 0)
It also saves correctly with context.Request.Files[0].SaveAs(...).
It looks like you are reading HttpRequest.InputStream. The better thing to do is to use the HttpRequest.Files collection.
(Or, even easier, use a FileUpload server control.)
Your code is copying the raw input, which is most likely a multipart request body, to a file.
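In other words, the same read/write loop works once it reads from the uploaded file's own stream instead of the raw request body. A minimal sketch, matching the fix the asker arrived at in the edit above:

string strExtension = Path.GetExtension(context.Request.Files[0].FileName).ToLower();
string fileName = @"C:\" + Guid.NewGuid().ToString() + strExtension;
using (FileStream fs = new FileStream(fileName, FileMode.CreateNew))
{
    byte[] bytes = new byte[16 * 1024];
    int bytesRead;
    // Files[0].InputStream contains only the file's bytes, without the multipart envelope
    while ((bytesRead = context.Request.Files[0].InputStream.Read(bytes, 0, bytes.Length)) > 0)
    {
        fs.Write(bytes, 0, bytesRead);
    }
}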
This doesn't answer your question exactly, but you could use HttpPostedFile.SaveAs to save the content to the desired path.
string strExtension = Path.GetExtension(context.Request.Files[0].FileName).ToLower();
string fileName = @"C:\" + Guid.NewGuid().ToString() + strExtension;
context.Request.Files[0].SaveAs(fileName); // I'll ignore the violation of the Law of Demeter ;)