The specified argument is outside the range of valid values - C#

I keep getting this error:
The specified argument is outside the range of valid values.
When I run this code in C#:
string sourceURL = "http://192.168.1.253/nphMotionJpeg?Resolution=320x240&Quality=Standard";
byte[] buffer = new byte[200000];
int read, total = 0;
// create HTTP request
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(sourceURL);
req.Credentials = new NetworkCredential("username", "password");
// get response
WebResponse resp = req.GetResponse();
// get response stream
// Make sure the stream gets closed once we're done with it
using (Stream stream = resp.GetResponseStream())
{
    // A larger buffer size would be beneficial, but it's not going
    // to make a significant difference.
    while ((read = stream.Read(buffer, total, 1000)) != 0)
    {
        total += read;
    }
}
// get bitmap
Bitmap bmp = (Bitmap)Bitmap.FromStream(new MemoryStream(buffer, 0, total));
pictureBox1.Image = bmp;
The error points to this line:
while ((read = stream.Read(buffer, total, 1000)) != 0)
Does anybody know what could cause this error or how to fix it?
Thanks in advance

Does anybody know what could cause this error?
I suspect total (or rather, total + 1000) has gone beyond the bounds of the array - you'll get this error as soon as the response no longer fits in your 200,000-byte buffer.
Personally I'd approach it differently - I'd create a MemoryStream to write to and a much smaller buffer to read into, always reading as much data as I can into the start of the buffer and then copying that many bytes into the stream. Then just rewind the stream (set Position to 0) before loading it as a bitmap.
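A minimal sketch of that approach (the 8 KB buffer size is an arbitrary choice; resp is the WebResponse from the question):
MemoryStream ms = new MemoryStream();
using (Stream stream = resp.GetResponseStream())
{
    byte[] chunk = new byte[8 * 1024]; // small fixed-size read buffer
    int bytesRead;
    // always read into the start of the buffer, then copy out that many bytes
    while ((bytesRead = stream.Read(chunk, 0, chunk.Length)) != 0)
    {
        ms.Write(chunk, 0, bytesRead);
    }
}
ms.Position = 0; // rewind before decoding
pictureBox1.Image = (Bitmap)Image.FromStream(ms);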
Or just use Stream.CopyTo if you're using .NET 4 or higher:
Stream output = new MemoryStream();
using (Stream input = resp.GetResponseStream())
{
    input.CopyTo(output);
}
output.Position = 0;
Bitmap bmp = (Bitmap)Bitmap.FromStream(output);
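(Note that, as with the manual loop, the MemoryStream has to stay open for as long as the Bitmap is in use - GDI+ keeps reading from the stream for the lifetime of the image, which is also why it isn't wrapped in a using block here.)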

Related

Request stream fails to write

I have to upload a large file to the server with the following code snippet:
static async Task LordNoBugAsync(string token, string filePath, string uri)
{
    HttpWebRequest fileWebRequest = (HttpWebRequest)WebRequest.Create(uri);
    fileWebRequest.Method = "PATCH";
    fileWebRequest.AllowWriteStreamBuffering = false; // this tells it to upload in chunks
    fileWebRequest.ContentType = "application/x-www-form-urlencoded";
    fileWebRequest.Headers["Authorization"] = "PHOENIX-TOKEN " + token;
    fileWebRequest.KeepAlive = false;
    fileWebRequest.Timeout = System.Threading.Timeout.Infinite;
    fileWebRequest.Proxy = null;
    using (FileStream fileStream = File.OpenRead(filePath))
    {
        fileWebRequest.ContentLength = fileStream.Length; // the length is required to upload in chunks
        int bufferSize = 512000;
        byte[] buffer = new byte[bufferSize];
        int lastBytesRead = 0;
        int byteCount = 0;
        Stream requestStream = fileWebRequest.GetRequestStream();
        requestStream.WriteTimeout = System.Threading.Timeout.Infinite;
        while ((lastBytesRead = fileStream.Read(buffer, 0, bufferSize)) != 0)
        {
            if (lastBytesRead > 0)
            {
                await requestStream.WriteAsync(buffer, 0, lastBytesRead);
                // for some reason this doesn't really write to the stream,
                // even though the buffer has content, for files > 60MB
                byteCount += lastBytesRead;
            }
        }
        requestStream.Flush();
        try
        {
            requestStream.Close();
            requestStream.Dispose();
        }
        catch
        {
            Console.Write("Error");
        }
        try
        {
            fileStream.Close();
            fileStream.Dispose();
        }
        catch
        {
            Console.Write("Error");
        }
    }
    ...getting response parts...
}
In the code, I make an HttpWebRequest and push the content to the server in chunks. The code works perfectly for any file under 60MB.
I tried a 70MB PDF. The buffer array has different content on each iteration, yet the request stream does not seem to get written. byteCount also reaches 70MB, showing the file is read properly.
Edit (more info): I set a breakpoint at requestStream.Close(). It clearly takes ~2 minutes for the request stream to be written for 60MB files, but only ~2ms for 70MB files.
This is how I call it:
Task magic = LordNoBugAsync(token, nameofFile, path);
magic.Wait();
I am sure the call is correct (it works for files from 0B to 60MB).
Any advice or suggestion is much appreciated.

Download speed test

I am writing an app in C# to measure and display download speed. I have the following code to download a 62MB file in chunks, which seems to work well for my purposes. I plan to extend this to measure the time required for each chunk, so it can be graphed.
Before doing so, I have a few questions to make sure this is actually doing what I think it is doing. Here is the code:
private void DownloadFile()
{
    string uri = ConfigurationManager.AppSettings["DownloadFile"].ToString();
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(new Uri(uri));
    int intChunkSize = 1048576; // 1 MB chunks
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        byte[] buffer = new byte[intChunkSize];
        int intStatusCode = (int)response.StatusCode;
        if (intStatusCode >= 200 && intStatusCode <= 299) // success
        {
            Stream sourceStream = response.GetResponseStream();
            MemoryStream memStream = new MemoryStream();
            int intBytesRead;
            bool finished = false;
            while (!finished)
            {
                intBytesRead = sourceStream.Read(buffer, 0, intChunkSize);
                if (intBytesRead > 0)
                {
                    memStream.Write(buffer, 0, intBytesRead);
                    // gather timing info here
                }
                else
                {
                    finished = true;
                }
            }
        }
    }
}
The questions:
1. Does response contain all the data when it is instantiated, or just the header info? response.ContentLength does reflect the correct value.
2. Even though I am using a 1 MB chunk size, the actual number of bytes read (intBytesRead) in each iteration is much smaller, typically 16384 bytes (16 KB) but sometimes 1024 (1 KB). Why is this?
3. Is there any way to force it to actually read 1 MB chunks? (A sketch addressing this follows below.)
4. Does it serve any purpose here to actually write the data to the MemoryStream?
Thanks.
Dan
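On questions 2 and 3: Stream.Read returns as soon as some data is available, so a single call on a network stream usually yields whatever has arrived rather than the full requested count. If fixed 1 MB chunks are really needed, the read can be looped until the buffer is full; a minimal sketch (the helper name ReadFullChunk is made up for illustration):
using System.IO;

// Loop until the buffer is full or the stream ends; a single Read call
// may return fewer bytes than requested.
static int ReadFullChunk(Stream source, byte[] buffer)
{
    int total = 0;
    while (total < buffer.Length)
    {
        int read = source.Read(buffer, total, buffer.Length - total);
        if (read == 0)
            break; // end of stream
        total += read;
    }
    return total; // less than buffer.Length only at end of stream
}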

Unhandled System.ArgumentException. Additional information: Parameter is not valid

I have this C# code that pulls images from a database and shows them in a PictureBox. When I first run the code, I get an error saying "An unhandled exception of type 'System.ArgumentException' occurred in System.Drawing.dll. Additional information: Parameter is not valid." But if I terminate and rerun the program, it works just fine, giving the intended results. Here is the part of the code that is giving me trouble:
private void buttonGetImage_Click(object sender, EventArgs e)
{
    string baseUrl = "http://someurl";
    HttpWebRequest request = null;
    foreach (var fileName in fileNames)
    {
        string url = string.Format(baseUrl, fileName);
        MessageBoxButtons buttons = MessageBoxButtons.OKCancel;
        DialogResult result;
        result = MessageBox.Show(url, fileName, buttons);
        if (result == System.Windows.Forms.DialogResult.Cancel)
        {
            this.Close();
        }
        request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        request.ContentType = "application/x-www-form-urlencoded";
        request.CookieContainer = container;
        response = (HttpWebResponse)request.GetResponse();
        Stream stream = response.GetResponseStream();
        byte[] buffer = new byte[10000000];
        int read, total = 0;
        while ((read = stream.Read(buffer, total, 1000)) != 0)
        {
            total += read;
        }
        MemoryStream ms = new MemoryStream(buffer, 0, total);
        ms.Seek(0, SeekOrigin.Current);
        Bitmap bmp = (Bitmap)Bitmap.FromStream(ms);
        pictureBoxTabTwo.Image = bmp;
        this.pictureBoxTabTwo.SizeMode = PictureBoxSizeMode.Zoom;
        pictureBoxTabTwo.Image.Save("FormTwo.jpg", System.Drawing.Imaging.ImageFormat.Jpeg);
    }
}
Can someone help me figure out what can be done?
The error points to this line: Bitmap bmp = (Bitmap)Bitmap.FromStream(ms);
Instead of using the Bitmap class, I used the Image class in my program. What I was doing before was reading the stream into a byte array and then converting the contents of that array back into a stream. Instead, I used
Image img = Image.FromStream(stream)
In this case you don't even have to use a MemoryStream. It is working perfectly fine for me now.
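In context, that simplification might look like the sketch below (assuming the same request and pictureBoxTabTwo as in the question). One caveat: GDI+ expects the stream to remain open for the lifetime of the image, so copying into a fresh Bitmap before the stream is disposed is the safer variant:
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream stream = response.GetResponseStream())
using (Image img = Image.FromStream(stream))
{
    // Copy into a new Bitmap so the displayed image no longer
    // depends on the (soon to be disposed) network stream.
    pictureBoxTabTwo.Image = new Bitmap(img);
}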
Hard to tell from your code what the problem could be, but most likely the query you are making on the server returns an unexpected response in some circumstances. You'd be best off getting a snapshot of the returned stream when things go wrong. That will allow you to diagnose the problem and take appropriate measures.
You need to do proper object disposal. Otherwise, the underlying connection doesn't close until the garbage collector catches up with the object and that can cause problems. Also, in .NET 4.0 and higher you can use the CopyTo method on Streams.
request = (HttpWebRequest)WebRequest.Create(url);
request.Method = "GET";
request.ContentType = "application/x-www-form-urlencoded";
request.CookieContainer = container;
using (response = (HttpWebResponse)request.GetResponse())
using (Stream stream = response.GetResponseStream())
{
    // All of this code is unnecessary if using .NET 4.0 or higher.
    /*
    byte[] buffer = new byte[10000000];
    int read, total = 0;
    while ((read = stream.Read(buffer, total, 1000)) != 0)
    {
        total += read;
    }
    MemoryStream ms = new MemoryStream(buffer, 0, total);
    ms.Seek(0, SeekOrigin.Current);
    */
    // Instead, use the following:
    MemoryStream ms = new MemoryStream();
    stream.CopyTo(ms);
    ms.Position = 0; // rewind before decoding, or FromStream will fail
    Bitmap bmp = (Bitmap)Bitmap.FromStream(ms);
    pictureBoxTabTwo.Image = bmp;
    this.pictureBoxTabTwo.SizeMode = PictureBoxSizeMode.Zoom;
    pictureBoxTabTwo.Image.Save("FormTwo.jpg", System.Drawing.Imaging.ImageFormat.Jpeg);
}
You can try this:
pictureBox1.Image = new Bitmap(sourceBitmap);

Serving large files with C# HttpListener

I'm trying to use HttpListener to serve static files, and this works well with small files. When file sizes grow larger (tested with 350 and 600MB files), the server chokes with one of the following exceptions:
HttpListenerException: The I/O operation has been aborted because of either a thread exit or an application request, or:
HttpListenerException: The semaphore timeout period has expired.
What needs to be changed to get rid of the exceptions, and let it run stable/reliable (and fast)?
Here's some further elaboration: this is basically a follow-up to this earlier question. The code is slightly extended to show the effect. Content is written in a loop with (hopefully reasonable) chunk sizes, 64 kB in my case, but changing the value made no difference except to speed (see the older question mentioned above).
using (FileStream fs = File.OpenRead(@"C:\test\largefile.exe"))
{
    // response is HttpListenerContext.Response...
    response.ContentLength64 = fs.Length;
    response.SendChunked = false;
    response.ContentType = System.Net.Mime.MediaTypeNames.Application.Octet;
    response.AddHeader("Content-disposition", "attachment; filename=largefile.EXE");
    byte[] buffer = new byte[64 * 1024];
    int read;
    using (BinaryWriter bw = new BinaryWriter(response.OutputStream))
    {
        while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            Thread.Sleep(200); // take this out and it will not run
            bw.Write(buffer, 0, read);
            bw.Flush(); // seems to have no effect
        }
        bw.Close();
    }
    response.StatusCode = (int)HttpStatusCode.OK;
    response.StatusDescription = "OK";
    response.OutputStream.Close();
}
I'm trying the download in a browser and also in a C# program using HttpWebRequest; it makes no difference.
Based on my research, I suppose that HttpListener is not really able to flush contents to the client, or at least does so at its own pace. I have also left out the BinaryWriter and written directly to the stream - no difference. I introduced a BufferedStream around the base stream - no difference. Funnily enough, if a Thread.Sleep(200) or slightly larger is introduced in the loop, it works on my box. However, I doubt it is stable enough for a real solution. This question gives the impression that there's no chance at all of getting it to run correctly (besides moving to IIS/ASP.NET, which I would resort to, but would rather stay away from if possible).
You didn't show us the other critical part: how you initialized the HttpListener. I tried your code with the initialization below, and it worked:
HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://*:8080/");
listener.Start();
Task.Factory.StartNew(() =>
{
    while (true)
    {
        HttpListenerContext context = listener.GetContext();
        Task.Factory.StartNew((ctx) =>
        {
            WriteFile((HttpListenerContext)ctx, @"C:\LargeFile.zip");
        }, context, TaskCreationOptions.LongRunning);
    }
}, TaskCreationOptions.LongRunning);
WriteFile is your code with the Thread.Sleep(200) removed. Here is the full code:
void WriteFile(HttpListenerContext ctx, string path)
{
    var response = ctx.Response;
    using (FileStream fs = File.OpenRead(path))
    {
        string filename = Path.GetFileName(path);
        // response is HttpListenerContext.Response...
        response.ContentLength64 = fs.Length;
        response.SendChunked = false;
        response.ContentType = System.Net.Mime.MediaTypeNames.Application.Octet;
        response.AddHeader("Content-disposition", "attachment; filename=" + filename);
        byte[] buffer = new byte[64 * 1024];
        int read;
        using (BinaryWriter bw = new BinaryWriter(response.OutputStream))
        {
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                bw.Write(buffer, 0, read);
                bw.Flush(); // seems to have no effect
            }
            bw.Close();
        }
        response.StatusCode = (int)HttpStatusCode.OK;
        response.StatusDescription = "OK";
        response.OutputStream.Close();
    }
}
Here is my SendFile function:
void SendFile(Stream output, string fileName)
{
    // output is the HttpListenerResponse.OutputStream and fileName is the name of the file you want to send
    FileStream file = new FileStream(fileName, FileMode.Open, FileAccess.Read);
    file.CopyTo(output);
    output.Close();
    file.Close();
}
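For reference, calling it from a listener handler might look roughly like this (the listener and context variables are assumed, as in the earlier answer):
HttpListenerContext context = listener.GetContext();
// set the length so the client sees a normal, non-chunked download
context.Response.ContentLength64 = new FileInfo(@"C:\LargeFile.zip").Length;
SendFile(context.Response.OutputStream, @"C:\LargeFile.zip");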

Download the first 1000 bytes of a file using C# [duplicate]

Possible Duplicate:
Download the first 1000 bytes
I need to download a text file from the internet using C#. The file can be quite large, and the information I need is always within the first 1000 bytes.
This is what I have so far. I found out that the server might ignore the Range header. Is there a way to limit the StreamReader to only read the first 1000 characters?
string GetWebPageContent(string url)
{
    string result = string.Empty;
    HttpWebRequest request;
    const int bytesToGet = 1000;
    request = WebRequest.Create(url) as HttpWebRequest;
    // get the first 1000 bytes
    request.AddRange(0, bytesToGet - 1);
    // the following is one alternative; adapt it to your needs
    using (WebResponse response = request.GetResponse())
    {
        using (StreamReader sr = new StreamReader(response.GetResponseStream()))
        {
            result = sr.ReadToEnd();
        }
    }
    return result;
}
Please follow up on your question from yesterday!
There is a Read method on StreamReader that lets you specify the number of characters to read.
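A minimal sketch, assuming the same response as in the question; ReadBlock keeps reading until it has the requested number of characters or hits the end of the stream:
using (StreamReader sr = new StreamReader(response.GetResponseStream()))
{
    char[] chars = new char[1000];
    // ReadBlock loops internally until 1000 chars are read or the stream ends
    int count = sr.ReadBlock(chars, 0, chars.Length);
    result = new string(chars, 0, count);
}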
You can retrieve the first 1000 bytes from the stream, then decode the string from the bytes:
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    using (Stream stream = response.GetResponseStream())
    {
        byte[] bytes = new byte[bytesToGet];
        int count = stream.Read(bytes, 0, bytesToGet);
        // WebResponse has no Encoding property; use the response's CharacterSet
        Encoding encoding = Encoding.GetEncoding(response.CharacterSet);
        result = encoding.GetString(bytes, 0, count);
    }
}
Instead of using request.AddRange(), which may be ignored by some servers as you said, read 1000 bytes from the stream and then close it. This way it's as if you get disconnected from the server after receiving the first 1000 bytes. Code:
int count = 0;
int result = 0;
byte[] buffer = new byte[1000];
// create the stream from the URL as shown above
do
{
    // we want 1000 bytes, but the stream may return less; result = bytes read this call
    result = stream.Read(buffer, count, 1000 - count); // wrap in try/catch for error handling
    count += result;
} while ((count < 1000) && (result != 0));
stream.Dispose();
// now buffer holds the first 1000 bytes of the response
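If you need the bytes as text afterwards, decode them with the appropriate Encoding, for example (assuming UTF-8): string text = Encoding.UTF8.GetString(buffer, 0, count);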
