HTTP Error 401.3 on created .gz file - c#

I have a method to compress a file with GZip:
public static void CompressFile(string filePath)
{
    string compressedFilePath = Path.GetTempFileName();
    using (FileStream compressedFileStream = new FileStream(compressedFilePath, FileMode.Append, FileSystemRights.Write, FileShare.Write, BufferSize, FileOptions.None))
    {
        using (GZipStream gzipStream = new GZipStream(compressedFileStream, CompressionMode.Compress))
        using (FileStream uncompressedFileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read))
        {
            byte[] buffer = new byte[BufferSize];
            int bytesRead;
            while ((bytesRead = uncompressedFileStream.Read(buffer, 0, BufferSize)) > 0)
            {
                gzipStream.Write(buffer, 0, bytesRead);
            }
        }
    }
    File.Delete(filePath);
    File.Move(compressedFilePath, filePath);
}
My problem is that on the testing server (Windows Server 2008 R2) it creates the file and the file can be downloaded via a browser, but on the web hosting server (older Windows Server 2008) it also creates the file, yet when I try to download it an access denied error is thrown.
The difference is in file permissions: on the R2 server the application pool identity (e.g. "MyWebSite") has access to the file, but on the R1 server only IIS_IUSRS with "Special permissions" does.

Ensure you have a MIME type added for the .gz extension in the IIS configuration; I think this may be the cause of the issue you are referring to.
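For reference, a static MIME mapping can be declared in web.config (a sketch; the MIME type value is an assumption, adjust as needed):

```xml
<configuration>
  <system.webServer>
    <staticContent>
      <!-- lets IIS serve .gz files as static content -->
      <mimeMap fileExtension=".gz" mimeType="application/x-gzip" />
    </staticContent>
  </system.webServer>
</configuration>
```

Note that a missing MIME type normally produces a 404.3 rather than a 401.3, so it is also worth comparing the NTFS ACLs on the two servers: File.Move preserves the ACL of the file created by Path.GetTempFileName(), which may not grant read access to the application pool identity.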

Related

In FileStream.ReadAsync we still have to use split logic?

My current project's download manager tool has code like this (for downloading large files from a file share, on a schedule):
using (var destinationFileStream = new FileStream(sourceFilename, FileMode.OpenOrCreate, FileAccess.Write))
{
    using (var file = new FileStream(filename, FileMode.Open, FileAccess.Read, FileShare.Read, buffer.Length, false))
    {
        int bytesRead;
        while ((bytesRead = file.Read(buffer, 0, buffer.Length)) > 0)
        {
            // write only the bytes actually read, not the whole buffer
            destinationFileStream.Write(buffer, 0, bytesRead);
            // ... bandwidth throttling code ...
        }
    }
}
If I convert this to FileStream.ReadAsync code, do I still have to use the split-into-buffer-size logic, or is that handled internally by .NET?
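For what it's worth, ReadAsync keeps the same contract as Read: it fills at most the requested count and you still loop until it returns 0, so the chunking logic stays. A minimal sketch (the path parameters are placeholders):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class AsyncCopy
{
    // Copies source to destination in fixed-size chunks. ReadAsync does not
    // remove the need for the loop; it only makes each read non-blocking.
    public static async Task CopyInChunksAsync(string sourcePath, string destinationPath)
    {
        var buffer = new byte[81920];
        using (var source = new FileStream(sourcePath, FileMode.Open, FileAccess.Read,
                                           FileShare.Read, buffer.Length, useAsync: true))
        using (var destination = new FileStream(destinationPath, FileMode.Create, FileAccess.Write,
                                                FileShare.None, buffer.Length, useAsync: true))
        {
            int bytesRead;
            while ((bytesRead = await source.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                await destination.WriteAsync(buffer, 0, bytesRead);
                // bandwidth throttling could go here, e.g. await Task.Delay(...)
            }
        }
    }
}
```

If no throttling is needed, Stream.CopyToAsync performs this same loop internally.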

How to read stream without writing to disk?

I created a report using ReportViewer and I can generate it on my local machine without any problem. On the other hand, after publishing the application to IIS I get the error "Access to the path 'C:\xxxxxxx.xlsx' is denied." It is of course a permission problem, but in our company there is usually no write permission to any location on the C: drive, so I think the best approach is to open the generated Excel file in memory, without writing it to disk. How can I update the method I use in order to achieve this? Any ideas?
I send the report (created using ReportViewer) to this method as a Stream and open the generated report without writing it to disk:
public static void StreamToProcess(Stream readStream, string fileName, string fileExtension)
{
    var myFile = fileName + "_" + Path.GetRandomFileName();
    var writeStream = new FileStream(String.Format("{0}\\{1}.{2}",
        Environment.GetFolderPath(Environment.SpecialFolder.InternetCache), myFile,
        fileExtension), FileMode.Create, FileAccess.Write);
    const int length = 16384;
    var buffer = new Byte[length];
    var bytesRead = readStream.Read(buffer, 0, length);
    while (bytesRead > 0)
    {
        writeStream.Write(buffer, 0, bytesRead);
        bytesRead = readStream.Read(buffer, 0, length);
    }
    readStream.Close();
    writeStream.Close();
    Process.Start(Environment.GetFolderPath(Environment.SpecialFolder.InternetCache)
        + "\\" + myFile + "." + fileExtension);
}
Any help would be appreciated.
Update :
I pass the report stream to this method as shown below:
StreamToProcess(reportStream, "Weekly_Report", "xlsx");
Note : reportStream is the generated report using ReportViewer.
You can write to a MemoryStream instead and return that stream to the caller. Make sure you do not close the MemoryStream instance. You can then wrap it in a StreamReader if you want to process the in-memory file.
public static Stream StreamToProcess(Stream readStream)
{
    var writeStream = new MemoryStream();
    const int length = 16384;
    var buffer = new Byte[length];
    var bytesRead = readStream.Read(buffer, 0, length);
    while (bytesRead > 0)
    {
        writeStream.Write(buffer, 0, bytesRead);
        bytesRead = readStream.Read(buffer, 0, length);
    }
    readStream.Close();
    writeStream.Position = 0; // rewind so the caller can read from the start
    return writeStream;
}
Your current approach of calling Process.Start will not work on the IIS server, as it would open the Excel document on the server rather than on the user's computer. You will have to provide a link for the user to download the file; you could use an AJAX request for this that streams from the MemoryStream. Have a look at this post for further details on how to implement it.
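In ASP.NET MVC, for example, the in-memory stream can be returned as a file download. A sketch only; the controller, action, and helper names below are made up for illustration:

```csharp
using System.IO;
using System.Web.Mvc;

public class ReportsController : Controller
{
    // Returns the in-memory report as a download; FileStreamResult
    // writes the stream to the response and disposes it afterwards.
    public ActionResult WeeklyReport()
    {
        Stream reportStream = GetReportStream(); // hypothetical helper producing the MemoryStream
        return File(reportStream,
                    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
                    "Weekly_Report.xlsx");
    }

    private Stream GetReportStream()
    {
        // placeholder: in the real application this comes from ReportViewer
        return new MemoryStream(new byte[0]);
    }
}
```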

C# write an uploaded file to a UNC with FileStream, read it later sometimes doesn't work

I've got a rare case where a file cannot be read from a UNC path immediately after it was written. Here's the workflow:
plupload sends a large file in chunks to a WebAPI method
Method writes the chunks to a UNC path (a storage server). This loops until the file is completely uploaded.
After a few other operations, the same method tries to read the file again and sometimes it cannot find the file
It only seems to happen after our servers have been idle for a while. If I repeat the upload a few times, it starts to work.
I thought it might be a network configuration issue, or something to do with the file not completely closing before being read again.
Here's part of the code that writes the file (is the filestream OK in this case?)
SaveStream(stream, new FileStream(fileName, FileMode.Append, FileAccess.Write));
Here's SaveStream definition:
private static void SaveStream(Stream stream, FileStream fileStream)
{
    using (var fs = fileStream)
    {
        var buffer = new byte[1024];
        var l = stream.Read(buffer, 0, 1024);
        while (l > 0)
        {
            fs.Write(buffer, 0, l);
            l = stream.Read(buffer, 0, 1024);
        }
        fs.Flush();
        fs.Close();
    }
}
Here's the code that reads the file:
var fileInfo = new FileInfo(fileName);
var exists = fileInfo.Exists;
It's the fileInfo.Exists that is returning false.
Thank you
These kinds of errors are mostly due to files not being closed yet.
Try passing the fileName to SaveStream and then use it as follows:
private static void SaveStream(Stream stream, string fileName)
{
    using (var fs = new FileStream(fileName, FileMode.Append, FileAccess.Write))
    {
        var buffer = new byte[1024];
        var l = stream.Read(buffer, 0, 1024);
        while (l > 0)
        {
            fs.Write(buffer, 0, l);
            l = stream.Read(buffer, 0, 1024);
        }
        fs.Flush();
    } // the end of the using block will close and dispose fs properly
}
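If the file still intermittently fails to appear over the UNC path even with proper disposal, a short retry can help rule out stale metadata. This is a speculative addition, not part of the answer above; note that FileInfo caches its state, so Refresh() is required before re-checking Exists:

```csharp
using System;
using System.IO;
using System.Threading;

static class FileAvailability
{
    // Polls for the file a few times, refreshing the cached FileInfo
    // state between attempts.
    public static bool WaitForFile(string fileName, int attempts = 5, int delayMs = 200)
    {
        var fileInfo = new FileInfo(fileName);
        for (int i = 0; i < attempts; i++)
        {
            fileInfo.Refresh();
            if (fileInfo.Exists)
                return true;
            Thread.Sleep(delayMs);
        }
        return false;
    }
}
```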

Windows Phone reading in a pdf using binary reader

(Warning: first time on Stack Overflow.) I want to be able to read in a PDF via a binary stream, but I run into an issue when writing it back to isolated storage: when I try to open the written file, Adobe Reader says it is not a valid PDF. The source file is 102 KB, but when I write it to isolated storage it becomes 108 KB.
My reason for doing this is that I want to be able to split the PDFs. I have tried PDFsharp (it doesn't open all PDF types). Here is my code:
public void pdf_split()
{
    string prefix = @"/PDFread;component/";
    string fn = originalFile;
    StreamResourceInfo sr = Application.GetResourceStream(new Uri(prefix + fn, UriKind.Relative));
    IsolatedStorageFile iStorage = IsolatedStorageFile.GetUserStoreForApplication();
    using (var outputStream = iStorage.OpenFile(sFile, FileMode.CreateNew))
    {
        Stream resourceStream = sr.Stream;
        long length = resourceStream.Length;
        byte[] buffer = new byte[32];
        int readCount = 0;
        while (readCount < length)
        {
            int read = sr.Stream.Read(buffer, 0, buffer.Length);
            readCount += read;
            outputStream.Write(buffer, 0, read);
        }
    }
}

Data at the root level is invalid. Line 1, position 1

I am downloading an XML file from the internet and saving it in isolated storage. If I try to read it, I get an error:
Data at the root level is invalid. Line 1, position 1.
string tempUrl = "http://xxxxx.myfile.xml"; // changed
WebClient client = new WebClient();
client.OpenReadAsync(new Uri(tempUrl));
client.OpenReadCompleted += new OpenReadCompletedEventHandler(delegate(object sender, OpenReadCompletedEventArgs e)
{
    StreamWriter writer = new StreamWriter(new IsolatedStorageFileStream("myfile.xml", FileMode.Create, FileAccess.Write, myIsolatedStorage));
    writer.WriteLine(e.Result);
    writer.Close();
});
This is how I download and save the file...
And I try to read it like that:
IsolatedStorageFileStream fileStream = myIsolatedStorage.OpenFile("myfile.xml", FileMode.Open, FileAccess.Read);
XDocument xmlDoc = XDocument.Load(fileStream);
This is where I get the error...
I have no problem reading the same file without downloading and saving it to isolated storage... so the fault must be there.
This:
writer.WriteLine(e.Result);
doesn't do what you think it does. It's just calling ToString() on a Stream, and writing the result to a file.
I suggest you avoid using a StreamWriter completely, and simply copy from e.Result straight to the IsolatedStorageFileStream:
using (var output = new IsolatedStorageFileStream("myfile.xml", FileMode.Create,
                                                  FileAccess.Write, myIsolatedStorage))
{
    CopyStream(e.Result, output);
}
where CopyStream would be a method to just copy the data, e.g.
public static void CopyStream(Stream input, Stream output)
{
    byte[] buffer = new byte[8 * 1024];
    int read;
    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, read);
    }
}
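A quick way to sanity-check this copy loop outside the phone environment is with plain MemoryStreams; the sketch below is standalone and nothing in it is isolated-storage specific:

```csharp
using System;
using System.IO;

class CopyStreamDemo
{
    // Same copy loop as in the answer above, shown standalone.
    public static void CopyStream(Stream input, Stream output)
    {
        byte[] buffer = new byte[8 * 1024];
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, read);
        }
    }

    static void Main()
    {
        var source = new MemoryStream(new byte[] { 1, 2, 3 });
        var destination = new MemoryStream();
        CopyStream(source, destination);
        Console.WriteLine(destination.Length); // prints 3
    }
}
```

On .NET 4 and later, Stream.CopyTo performs the same loop internally.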
