Insufficient system resources exist to complete the requested service - c#

I have a web application hosted in IIS 6 on a Windows Server 2003 box that has to handle two large PDF files of around 7-8 MB each. These files are read by the website from a network share and the bytes are passed to a WCF service for saving elsewhere.
Here is the code I use to read the files:
public static byte[] ReadFile(string filePath)
{
    int count;
    int sum = 0;
    byte[] buffer;
    FileStream stream = new FileStream(filePath, FileMode.Open, FileAccess.Read);
    try
    {
        int length = (int)stream.Length;
        buffer = new byte[length];
        while ((count = stream.Read(buffer, sum, length - sum)) > 0)
            sum += count;
        return buffer;
    }
    catch (Exception)
    {
        throw;
    }
    finally
    {
        stream.Close();
        stream.Dispose();
    }
}
An error is thrown on the stream.Read() call:
Insufficient system resources exist to complete the requested service
This code works in my dev environment, but as soon as I deploy to our production environment we get this error message.
Searching around, I have seen this error surface a few times, and the suggested workaround is to use File.Move(), but we cannot do this as the file contents need to be passed to a WCF service method.
Is there something in IIS 6 that needs to be changed to allow holding 15-20 MB in memory when reading the files, or is there something else that needs to be configured?
Any ideas?

See this:
Why I need to read file piece by piece to buffer?
It seems you are reading the whole file into memory at once, rather than buffering it:
buffer = new byte[length];
Best regards.
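To make that concrete, here is a minimal sketch of reading the file as a stream rather than as one large byte[], assuming the WCF contract can be switched to a streamed Stream parameter; the service and method names below are hypothetical, not from the original code:
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IFileService
{
    [OperationContract]
    void UploadFile(Stream fileData);
}

public static class FileSender
{
    public static void SendFileToService(string filePath, IFileService client)
    {
        // Open the file for sequential, read-only access; nothing is buffered up front.
        using (FileStream stream = new FileStream(filePath, FileMode.Open, FileAccess.Read))
        {
            // With transferMode="Streamed" on the binding, WCF pulls from this
            // stream in chunks, so the web server never holds the whole PDF in memory.
            client.UploadFile(stream);
        }
    }
}
The binding on both sides would also need transferMode="Streamed" and a large enough maxReceivedMessageSize for this to work.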

Related

How to upload files (images/videos/etc.) to a server using Stream on WCF REST API in C# properly?

I'm setting up a function to upload images/videos to the server using a WCF REST API. The files are successfully uploaded to the proper destination folder, but they end up unreadable no matter what kind of file it is.
Is there something wrong in my code (especially the FileStream.Write part) that causes this? Or is it possible that the problem lies elsewhere (such as in the Web.config file)?
Here's my code snippet:
public string uploadFile(Stream fileStream)
{
    String fileName = System.Web.HttpContext.Current.Request.QueryString["fileName"];
    String destFileName = HHCWCFApp.Properties.Settings.Default.TemporaryFilePath + fileName;
    String destLink = HHCWCFApp.Properties.Settings.Default.Hyperlink + fileName;
    try
    {
        int length = 256;
        int bytesRead = 0;
        Byte[] buffer = new Byte[length];
        using (FileStream fs = new FileStream(destFileName, FileMode.Create))
        {
            do
            {
                bytesRead = fileStream.Read(buffer, 0, length);
                fs.Write(buffer, 0, bytesRead);
            }
            while (bytesRead == length);
        }
        fileStream.Dispose();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString());
    }
    if (File.Exists(destFileName))
    {
        return destLink;
    }
    else
    {
        return "Not Found";
    }
}
What kinds of files are you uploading? Larger files may fall foul of the maximum request length, which is set in the web.config as below:
<configuration>
  <system.web>
    <httpRuntime maxRequestLength="xxxx" />
  </system.web>
</configuration>
Try a text file of only a few KB; if that works but the larger files don't, then this could very well be the issue. I've tested your code and there isn't a problem there.
Also bear in mind that the IIS server you use may have its own maximum request length set, which may override your value.
The default is 4 MB, and you could write a function to retrieve the value so your client/calling code can check whether the file it's going to pass exceeds the maximum size (see the sketch at the end of this answer).
Have a read here on Microsoft's page for a bit more info.
Edit: misread the code first time round, apologies
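For reference, here is a minimal sketch of the kind of helper mentioned above: it reads the configured httpRuntime maxRequestLength (stored in KB) so calling code can compare a file's size against the limit before uploading. The class and method names are made up for illustration:
using System.IO;
using System.Web.Configuration;

public static class UploadLimits
{
    // Returns the configured maxRequestLength in bytes (the web.config value is in KB).
    public static long GetMaxRequestLengthInBytes()
    {
        var section = (HttpRuntimeSection)WebConfigurationManager.GetSection("system.web/httpRuntime");
        return (long)section.MaxRequestLength * 1024;
    }

    // Simple pre-flight check before attempting an upload.
    public static bool FileFitsWithinLimit(string path)
    {
        return new FileInfo(path).Length <= GetMaxRequestLengthInBytes();
    }
}
Since the web.config lives on the server, the service could expose this value through an operation so the client can run the check before sending.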

System.IO.Compression.ZipArchive: working with large files

I have code in an SSIS script task, written in C#, that zips a file.
I have a problem when zipping a file of approximately 1 GB.
I tried to implement this code and still get a 'System.OutOfMemoryException' error:
System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
at ST_4cb59661fb81431abcf503766697a1db.ScriptMain.AddFileToZipUsingStream(String sZipFile, String sFilePath, String sFileName, String sBackupFolder, String sPrefixFolder) in c:\Users\dtmp857\AppData\Local\Temp\vsta\84bef43d323b439ba25df47c365b5a29\ScriptMain.cs:line 333
at ST_4cb59661fb81431abcf503766697a1db.ScriptMain.Main() in c:\Users\dtmp857\AppData\Local\Temp\vsta\84bef43d323b439ba25df47c365b5a29\ScriptMain.cs:line 131
This is the snippet of code that zips the file:
protected bool AddFileToZipUsingStream(string sZipFile, string sFilePath, string sFileName, string sBackupFolder, string sPrefixFolder)
{
    bool bIsSuccess = false;
    try
    {
        if (File.Exists(sZipFile))
        {
            using (ZipArchive addFile = ZipFile.Open(sZipFile, ZipArchiveMode.Update))
            {
                addFile.CreateEntryFromFile(sFilePath, sFileName);
                //Move File after zipping it
                BackupFile(sFilePath, sBackupFolder, sPrefixFolder);
            }
        }
        else
        {
            //from https://stackoverflow.com/questions/28360775/adding-large-files-to-io-compression-ziparchiveentry-throws-outofmemoryexception
            using (var zipFile = ZipFile.Open(sZipFile, ZipArchiveMode.Update))
            {
                var zipEntry = zipFile.CreateEntry(sFileName);
                using (var writer = new BinaryWriter(zipEntry.Open()))
                using (FileStream fs = File.Open(sFilePath, FileMode.Open))
                {
                    var buffer = new byte[16 * 1024];
                    using (var data = new BinaryReader(fs))
                    {
                        int read;
                        while ((read = data.Read(buffer, 0, buffer.Length)) > 0)
                            writer.Write(buffer, 0, read);
                    }
                }
            }
            //Move File after zipping it
            BackupFile(sFilePath, sBackupFolder, sPrefixFolder);
        }
        bIsSuccess = true;
    }
    catch (Exception ex)
    {
        throw ex;
    }
    return bIsSuccess;
}
What am I missing? Please give me suggestions, maybe a tutorial or a best practice for handling this problem.
I know this is an old post but what can I say, it helped me sort out some stuff and still comes up as a top hit on Google.
So there is definitely something wrong with the System.IO.Compression library!
First and foremost...
You must make sure to turn off the "Prefer 32-bit" build option. Having it set (in my case with a build for "AnyCPU") causes many inconsistent issues.
Now with that said, I took some demo files (several less than 500 MB, one at 500 MB, and one at 1 GB) and created a sample program with 3 buttons that made use of 3 methods.
Button 1 - ZipFile.CreateFromDirectory(AbsolutePath, TargetFile);
Button 2 - ZipArchive.CreateEntryFromFile(AbsolutePath, RelativePath);
Button 3 - Using the [16 * 1024] byte buffer method from above
Now here is where it gets interesting. (Assuming that the program is built as "AnyCPU" with NO "Prefer 32-bit" check)... all 3 methods worked on a 64-bit Windows OS, regardless of how much memory it had.
However, as soon as I ran the same test on a 32-bit OS, regardless of how much memory it had, ONLY method 1 worked!
Methods 2 and 3 blew up with the OutOfMemoryException, and to add salt to the wound, method 3 (the preferred chunking method) actually corrupted more files than method 2!
By corrupted, I mean that the 500 MB and the 1 GB files ended up in the zipped archive but at a size smaller than the original (they were basically truncated).
So I don't know... since there aren't many 32-bit OSes around anymore, I guess it is a moot point.
But it seems like there are some bugs in the System.IO.Compression framework!
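Not from the answers above, but worth noting as a hedged sketch: ZipArchiveMode.Update can hold the archive contents in memory until the archive is disposed, which is a common cause of OutOfMemoryException on large files. When the archive is being created from scratch, opening it in ZipArchiveMode.Create and streaming the source file into the entry keeps memory usage flat. The class and method names here are illustrative only:
using System.IO;
using System.IO.Compression;

public static class LargeFileZipper
{
    // Sketch: create the archive in Create mode and stream the file into the entry,
    // so the data is written straight to disk instead of being buffered in memory.
    public static void ZipLargeFile(string sZipFile, string sFilePath, string sFileName)
    {
        using (FileStream zipStream = new FileStream(sZipFile, FileMode.Create))
        using (ZipArchive archive = new ZipArchive(zipStream, ZipArchiveMode.Create))
        {
            ZipArchiveEntry entry = archive.CreateEntry(sFileName, CompressionLevel.Optimal);
            using (Stream entryStream = entry.Open())
            using (FileStream source = File.OpenRead(sFilePath))
            {
                source.CopyTo(entryStream); // copies in chunks; no full-file buffer
            }
        }
    }
}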

Writing To A File With Multiple Streams C#

I am trying to download a large file (>1 GB) from one server to another using HTTP. To do this I am making HTTP range requests in parallel, which lets me download the file in chunks concurrently.
When saving to disk I am taking each response stream, opening the same file as a file stream, seeking to the range I want, and then writing.
However, I find that all but one of my response streams time out. It looks like the disk I/O cannot keep up with the network I/O. However, if I do the same thing but have each thread write to a separate file, it works fine.
For reference, here is my code writing to the same file:
int numberOfStreams = 4;
List<Tuple<int, int>> ranges = new List<Tuple<int, int>>();
string fileName = @"C:\MyCoolFile.txt";
//List populated here
Parallel.For(0, numberOfStreams, (index, state) =>
{
    try
    {
        HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create("Some URL");
        using (Stream responseStream = webRequest.GetResponse().GetResponseStream())
        {
            using (FileStream fileStream = File.Open(fileName, FileMode.OpenOrCreate, FileAccess.Write, FileShare.Write))
            {
                fileStream.Seek(ranges[index].Item1, SeekOrigin.Begin);
                byte[] buffer = new byte[64 * 1024];
                int bytesRead;
                while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    if (state.IsStopped)
                    {
                        return;
                    }
                    fileStream.Write(buffer, 0, bytesRead);
                }
            }
        };
    }
    catch (Exception e)
    {
        exception = e;
        state.Stop();
    }
});
And here is the code writing to multiple files:
int numberOfStreams = 4;
List<Tuple<int, int>> ranges = new List<Tuple<int, int>>();
string fileName = @"C:\MyCoolFile.txt";
//List populated here
Parallel.For(0, numberOfStreams, (index, state) =>
{
    try
    {
        HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create("Some URL");
        using (Stream responseStream = webRequest.GetResponse().GetResponseStream())
        {
            using (FileStream fileStream = File.Open(fileName + "." + index + ".tmp", FileMode.OpenOrCreate, FileAccess.Write, FileShare.Write))
            {
                fileStream.Seek(ranges[index].Item1, SeekOrigin.Begin);
                byte[] buffer = new byte[64 * 1024];
                int bytesRead;
                while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    if (state.IsStopped)
                    {
                        return;
                    }
                    fileStream.Write(buffer, 0, bytesRead);
                }
            }
        };
    }
    catch (Exception e)
    {
        exception = e;
        state.Stop();
    }
});
My question is this: are there additional checks or actions that C#/Windows performs when writing to a single file from multiple threads that would cause the file I/O to be slower than when writing to multiple files? All disk operations should be bound by the disk speed, right? Can anyone explain this behavior?
Thanks in advance!
UPDATE: Here is the error the source server is throwing:
"Unable to write data to the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond."
[System.IO.IOException]: "Unable to write data to the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond."
InnerException: "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond"
Message: "Unable to write data to the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond."
StackTrace: " at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)\r\n at System.Net.Security._SslStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)\r\n at System.Net.Security._SslStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)\r\n at System.Net.Security.SslStream.Write(Byte[] buffer, Int32 offset, Int32 count)\r\n
Unless you're writing to a striped RAID, you're unlikely to see any performance benefit from writing to the file from multiple threads concurrently. In fact, it's more likely to be the opposite: the concurrent writes get interleaved and cause random access, incurring disk seek latencies that make them orders of magnitude slower than large sequential writes.
To get a sense of perspective, look at some latency comparisons. A sequential 1 MB read from disk takes about 20 ms; writes take approximately the same time. Each disk seek, on the other hand, takes around 10 ms. If your writes are interleaved in 4 KB chunks, your 1 MB write will require an additional 2560 ms of seek time, making it roughly 100 times slower than a sequential write.
I would suggest only allowing one thread to write to the file at any time, and using parallelism just for the network transfer. You can use a producer-consumer pattern where downloaded chunks are written to a bounded concurrent collection (such as BlockingCollection<T>) and then picked up and written to disk by a dedicated thread (sketched below).
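As a rough sketch of that producer-consumer idea (the types and names here are illustrative, not taken from the question's code), the download threads only enqueue chunks and a single writer thread performs all of the file writes:
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

// Chunk of downloaded data plus its offset in the target file (illustrative type).
class Chunk
{
    public long Offset;
    public byte[] Data;
}

class SingleWriterDownload
{
    static void Demo(string fileName)
    {
        // Bounded queue so the downloaders cannot run far ahead of the disk.
        using (var chunks = new BlockingCollection<Chunk>(boundedCapacity: 16))
        {
            // Dedicated writer: the only code that touches the FileStream.
            Task writer = Task.Run(() =>
            {
                using (var fs = new FileStream(fileName, FileMode.Create, FileAccess.Write))
                {
                    foreach (Chunk chunk in chunks.GetConsumingEnumerable())
                    {
                        fs.Seek(chunk.Offset, SeekOrigin.Begin);
                        fs.Write(chunk.Data, 0, chunk.Data.Length);
                    }
                }
            });

            // The range-request download threads (producers) would call:
            // chunks.Add(new Chunk { Offset = ..., Data = ... });

            chunks.CompleteAdding(); // tell the writer no more chunks are coming
            writer.Wait();
        }
    }
}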
fileStream.Seek(ranges[index].Item1, SeekOrigin.Begin);
That Seek() call is a problem: you'll seek to a part of the file that's very far past the current end-of-file. Your next fileStream.Write() call then forces the file system to extend the file on disk, filling the unwritten parts of it with zeros.
This can take a while, and your thread will be blocked until the file system is done extending the file. That might well be long enough to trigger a timeout. You'd see this go wrong early, at the start of the transfer.
A workaround is to create and fill the entire file before you start writing real data. This is otherwise a very common strategy used by downloaders; you might have seen .part files before. Another nice benefit is that you get a decent guarantee that the transfer cannot fail because the disk ran out of space. Beware that filling a file with zeros is only cheap when the machine has enough RAM; 1 GB should not be a problem on modern machines (a small pre-fill sketch follows after the repro code and output below).
Repro code:
using System;
using System.IO;
using System.Diagnostics;

class Program {
    static void Main(string[] args) {
        string path = @"c:\temp\test.bin";
        var fs = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Write);
        fs.Seek(1024L * 1024 * 1024, SeekOrigin.Begin);
        var buf = new byte[4096];
        var sw = Stopwatch.StartNew();
        fs.Write(buf, 0, buf.Length);
        sw.Stop();
        Console.WriteLine("Writing 4096 bytes took {0} milliseconds", sw.ElapsedMilliseconds);
        Console.ReadKey();
        fs.Close();
        File.Delete(path);
    }
}
Output:
Writing 4096 bytes took 1491 milliseconds
That was on a fast SSD; a spindle drive is going to take much longer.
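Building on the workaround above, here is a minimal pre-fill sketch (names are illustrative; the total size would come from the Content-Length of the source): write zeros sequentially up to the full length once, before the parallel range downloads start seeking and writing into the file:
using System;
using System.IO;

static class FilePreAllocator
{
    // Sketch: sequentially write zeros up to the target size so later seeks land
    // inside already-allocated, already-zeroed space.
    public static void PreFillWithZeros(string path, long totalSize)
    {
        byte[] zeros = new byte[1024 * 1024]; // 1 MB of zeros per write
        using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write))
        {
            long remaining = totalSize;
            while (remaining > 0)
            {
                int chunk = (int)Math.Min(zeros.Length, remaining);
                fs.Write(zeros, 0, chunk);
                remaining -= chunk;
            }
        }
    }
}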
Here's my guess from the information given so far:
On Windows, when you write to a position that extends the file size, Windows needs to zero-initialize everything that comes before it. This prevents old disk data from leaking, which would be a security problem.
Probably all but your first thread need to zero-init so much data that the download times out. This is not really streaming anymore, because the first write takes ages.
If you have the LPIM privilege you can avoid the zero-initialization; otherwise you cannot, for security reasons. Free Download Manager shows a message that it starts zero-initing at the start of each download.
So after trying all of the suggestions, I ended up using a MemoryMappedFile and opening a stream to write to it on each thread:
int numberOfStreams = 4;
List<Tuple<int, int>> ranges = new List<Tuple<int, int>>();
string fileName = @"C:\MyCoolFile.txt";
//Ranges list populated here
using (MemoryMappedFile mmf = MemoryMappedFile.CreateFromFile(fileName, FileMode.OpenOrCreate, null, fileSize.Value, MemoryMappedFileAccess.ReadWrite))
{
    Parallel.For(0, numberOfStreams, index =>
    {
        try
        {
            HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create("Some URL");
            using (Stream responseStream = webRequest.GetResponse().GetResponseStream())
            {
                using (MemoryMappedViewStream fileStream = mmf.CreateViewStream(ranges[index].Item1, ranges[index].Item2 - ranges[index].Item1 + 1, MemoryMappedFileAccess.Write))
                {
                    responseStream.CopyTo(fileStream);
                }
            };
        }
        catch (Exception e)
        {
            exception = e;
        }
    });
}
System.Net.Sockets.NetworkStream.Write
The stack trace shows that the error happens when writing to the server. It is a timeout. This can happen because of:
network failure/overloading
an unresponsive server.
This is not an issue with writing to a file. Analyze the network and the server. Maybe the server is not ready for concurrent usage.
Prove this theory by disabling writing to the file. The error should remain.

Accessing files on MSSQL filestore through UNC path is causing delay - c#

I am experiencing some strange behaviour from the code I am using to stream files to my clients.
I have an MSSQL server which acts as a filestore, with files that are accessed via a UNC path.
On my web server I have some .NET code running that handles streaming the files (in this case pictures and thumbnails) to my clients.
My code works, but I am experiencing a constant delay of ~12 seconds on the initial file request. Once I have made the initial request, it is as if the server wakes up and suddenly becomes responsive, only to fall back to the same behaviour some time later.
At first I thought it was my code, but from what I can see in the server activity log there is nothing resource-intensive going on. My theory is that on each call to the server the path must first be mounted, and that is what causes the delay. It then unmounts some time later and has to be remounted.
For reference I am posting my code (maybe I just cannot see the problem):
public async static Task StreamFileAsync(HttpContext context, FileInfo fileInfo)
{
    // This controls how many bytes to read at a time and send to the client
    int bytesToRead = 512 * 1024; // 512 KB
    // Buffer to read bytes in the chunk size specified above
    byte[] buffer = new Byte[bytesToRead];
    // Clear the current response content/headers
    context.Response.Clear();
    context.Response.ClearHeaders();
    // Indicate the type of data being sent
    context.Response.ContentType = FileTools.GetMimeType(fileInfo.Extension);
    // Name the file
    context.Response.AddHeader("Content-Disposition", "filename=\"" + fileInfo.Name + "\"");
    context.Response.AddHeader("Content-Length", fileInfo.Length.ToString());
    // Open the file
    using (var stream = fileInfo.OpenRead())
    {
        // The number of bytes read
        int length;
        do
        {
            // Verify that the client is connected
            if (context.Response.IsClientConnected)
            {
                // Read data into the buffer
                length = await stream.ReadAsync(buffer, 0, bytesToRead);
                // and write it out to the response's output stream
                await context.Response.OutputStream.WriteAsync(buffer, 0, length);
                try
                {
                    // Flush the data
                    context.Response.Flush();
                }
                catch (HttpException)
                {
                    // Cancel the download if an HttpException happens
                    // (i.e. the client has disconnected but we tried to send some data)
                    length = -1;
                }
                // Clear the buffer
                buffer = new Byte[bytesToRead];
            }
            else
            {
                // Cancel the download if the client has disconnected
                length = -1;
            }
        } while (length > 0); // Repeat until no data is read
    }
    // Tell the response not to send any more content to the client
    context.Response.SuppressContent = true;
    // Tell the application to skip to the EndRequest event in the HTTP pipeline
    context.ApplicationInstance.CompleteRequest();
}
If anyone could shed some light on this problem I would be very grateful!

Transferring a File with a NetworkStream then rebuilding the file fails

I am trying to send a file over a NetworkStream and rebuild it on the client side. I can get the data over correctly (I think), but when I use either a BinaryWriter or a FileStream object to recreate the file, the file is cut off at the beginning at the same point no matter what method I use.
private void ReadandSaveFileFromServer(ref TcpClient clientATF, ref NetworkStream currentStream, string locationToSave)
{
    int fileSize = 0;
    string fileName = "";
    fileName = ReadStringFromServer(ref clientATF, ref currentStream);
    fileSize = ReadIntFromServer(ref clientATF, ref currentStream);
    byte[] fileSent = new byte[fileSize];
    if (currentStream.CanRead && clientATF.Connected)
    {
        currentStream.Read(fileSent, 0, fileSent.Length);
        WriteToConsole("Log Recieved");
    }
    else
    {
        WriteToConsole("Log Transfer Failed");
    }
    FileStream fileToCreate = new FileStream(locationToSave + "\\" + fileName, FileMode.Create);
    fileToCreate.Seek(0, SeekOrigin.Begin);
    fileToCreate.Write(fileSent, 0, fileSent.Length);
    fileToCreate.Close();
    //binWriter = new BinaryWriter(File.Open(locationToSave + "\\" + fileName, FileMode.Create));
    //binWriter.Write(fileSent);
    //binWriter.Close();
}
When I step through and check fileName and fileSize, they are correct. The byte[] is also fully populated. Any clue as to what I can do next?
Thanks in advance...
Sean
EDIT!!!:
So I figured out what is happening. When I read a string and then the int from the stream, the byte array is 256 indices long, so my read for the string is also taking in the int, which then clobbers the other areas. Need to figure this out...
For one thing, you can use the convenience method File.WriteAllBytes to write the data more simply. But I doubt that that's the problem.
You're assuming you can read all the data in a single call to Read. You're ignoring the return value. Don't do that - instead, read multiple times until either you've read everything you expect to, or you've reached the end of the stream. See this article for more details. If you're using .NET 4, there's a new CopyTo method you may find useful.
(As an aside, your use of ref suggests that you don't understand what it really means. It's well worth making sure you understand how arguments are passed in C#.)
To add to Jon Skeet's answer, your reading code should be:
int bytesRead;
int readPos = 0;
do
{
    bytesRead = currentStream.Read(fileSent, readPos, fileSent.Length - readPos);
    readPos += bytesRead;
} while (bytesRead > 0);
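Along the same lines, here is a hedged sketch (not from the answers above; the helper name is made up) of a routine that loops until exactly the expected number of bytes has arrived, which can be applied to both the length prefix and the file body:
using System.IO;

static class StreamHelpers
{
    // Sketch: keep reading until exactly 'count' bytes have arrived,
    // or fail if the stream ends early.
    public static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] data = new byte[count];
        int readPos = 0;
        while (readPos < count)
        {
            int bytesRead = stream.Read(data, readPos, count - readPos);
            if (bytesRead == 0)
                throw new EndOfStreamException("Stream ended before the expected data was read.");
            readPos += bytesRead;
        }
        return data;
    }
}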
If you are looking for a general solution for sending and receiving files over a network, have you considered using a C# network library? It has probably solved most of the issues you will come across when trying to do this.
Disclaimer: I'm one of the developers of this library.
