I think I'm missing something about how HttpWebRequest works when streaming uploads of large files.
Basically, I found that I receive a timeout exception when sending large files to the server, and a post suggested doing it asynchronously and handling the timeout myself.
The thing is, after debugging I found that calling "GetRequestStreamAsync" and writing to the stream does nothing on the server side; the server is only hit when GetResponseAsync is called.
So my question is:
- In the code marked //1, the file is written to the request stream, but I don't see memory increasing, nor does the server receive any request - where does the streamed data go?
This is basically my code:
HttpWebRequest request = RESTUtils.InitializeRequest(...);
request.AllowWriteStreamBuffering = false;
request.ContentLength = i_InputStream.Length;
request.Timeout = 5000;

using (Stream requestStream = request.GetRequestStreamWithTimeout())
{
    if (requestStream != null) //1
    {
        // We will write the stream to the request
        byte[] buffer = new byte[UPLOAD_FILE_BUFFER_SIZE];
        int read = i_InputStream.Read(buffer, 0, buffer.Length);
        while (read > 0)
        {
            requestStream.Write(buffer, 0, read);
            read = i_InputStream.Read(buffer, 0, buffer.Length);
        }
    }
}

using (var response = request.GetResponseWithTimeout(-1))
using (var responseStream = response.GetResponseStream())
{
}
public static class WebRequestExtensions
{
    public static Stream GetRequestStreamWithTimeout(
        this WebRequest request,
        int? millisecondsTimeout = null)
    {
        return AsyncToSyncWithTimeout(
            request.BeginGetRequestStream,
            request.EndGetRequestStream,
            millisecondsTimeout ?? request.Timeout);
    }

    public static WebResponse GetResponseWithTimeout(
        this HttpWebRequest request,
        int? millisecondsTimeout = null)
    {
        return AsyncToSyncWithTimeout(
            request.BeginGetResponse,
            request.EndGetResponse,
            millisecondsTimeout ?? request.Timeout);
    }

    private static T AsyncToSyncWithTimeout<T>(
        Func<AsyncCallback, object, IAsyncResult> begin,
        Func<IAsyncResult, T> end,
        int millisecondsTimeout)
    {
        var iar = begin(null, null);
        if (!iar.AsyncWaitHandle.WaitOne(millisecondsTimeout))
        {
            var ex = new TimeoutException();
            throw new WebException(ex.Message, ex, WebExceptionStatus.Timeout, null);
        }
        return end(iar);
    }
}
Thanks!
== Edit 9/9/15 ==
Something even weirder happens: I set a breakpoint right after GetResponseAsync, and I can see that the server receives the call.
After that, I kill the client process -> the server still uploads the file successfully.
The same happens if I call "Abort".
Does anyone know why?
Instead of using the old-style Begin/End async pattern, you should consider switching to async/await, which would greatly simplify your code.
You would then set the Timeout property on the request to a large value to accommodate your waiting time; then, instead of the callback-based async code, you could just do:
var request = SomeMethodToCreateRequest();
request.Timeout = int.MaxValue; // (don't do this; pick a generous but finite value)
var response = await request.GetResponseAsync();
The timeout should be respected internally, and you get to simplify your code.
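For illustration, here is a minimal sketch of what the async/await version of such an upload could look like. All names (UploadSketch, UploadAsync, the url parameter) are my own, not from the post, and the cancellation approach is one possible way to enforce an overall timeout, since HttpWebRequest.Timeout does not apply to the Task-based methods:

```csharp
using System;
using System.IO;
using System.Net;
using System.Threading;
using System.Threading.Tasks;

public static class UploadSketch
{
    // Streams `input` to `url` as a POST body, aborting the request if the
    // whole operation takes longer than `timeout`.
    public static async Task UploadAsync(string url, Stream input, TimeSpan timeout)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.AllowWriteStreamBuffering = false;
        request.ContentLength = input.Length;

        using (var cts = new CancellationTokenSource(timeout))
        using (cts.Token.Register(request.Abort)) // Abort faults any pending await
        {
            using (var requestStream = await request.GetRequestStreamAsync())
            {
                await input.CopyToAsync(requestStream); // streams in chunks
            }
            using (var response = await request.GetResponseAsync())
            {
                // read or discard the response body here
            }
        }
    }
}
```

Note that with AllowWriteStreamBuffering disabled and ContentLength set, the writes go straight to the socket rather than to an in-memory buffer, which is why client memory does not grow during the copy.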
Related
I'm trying to develop a .NET app which will communicate with a .NET Core server app (which I did not develop). The main goal is to download a file. Since the client app will have a WPF GUI, the whole download should happen asynchronously.
From reading the server app's API I know that the response to my request is a Base64-encoded string containing the contents of a file.
What I wanted to do is: asynchronously send a request, take its response stream, asynchronously read from that stream into a char array, Base64-decode it, and asynchronously write it to a file (see code below).
But Convert.FromBase64CharArray most often fails with an exception:
invalid length for a base-64 char array or string
Occasionally it succeeds, but the download ends prematurely (downloadedLength < totalLength).
It seems as if the connection was closed too early, but I'm not entirely sure whether that's true.
What I tried to resolve this issue so far:
Using streamReader.ReadToEndAsync(), decoding the complete string, then writing asynchronously: worked, but downloading 115 MB used around 600 MB of RAM
Making the whole { read, decode, write } block async instead of async read, sync decode, async write: no improvement
No async at all: fails sometimes, but not as often as the async version
Using FromBase64Transform.TransformBlock instead of Convert.FromBase64CharArray: didn't finish the download in reasonable time, since InputBlockSize is fixed at 1 byte (downloading about 115 MB)
Communicating through an SSH tunnel to bypass the Apache server: the download didn't even start
Having client and server run on the same machine: seemed to work fine
Some specs:
Client: Windows 7 x64, .NET 4.6.1
Server: Ubuntu 16.04, Apache 2.4, .NET Core 2.1.4
And finally: the Code
The function that requests the file:
private async Task<WebResponse> DoGetRequestAsync(string requestString)
{
    var completeRequestUrl = $"{_options.StoreUrl}/api/{requestString}";
    try
    {
        RequestStarted?.Invoke(true);
        var request = (HttpWebRequest)WebRequest.Create(completeRequestUrl);
        request.ContentType = "text/plain";
        request.Method = "GET";
        var response = await request.GetResponseAsync();
        RequestFinished?.Invoke(true);
        return response;
    }
    catch (Exception e)
    {
        Console.WriteLine($"ERROR: {e.Message}");
    }
    return null;
}
The function that handles the response:
public async Task<string> DownloadPackage(string vendor, string package)
{
    // declaring some vars
    using (var response = await DoGetRequestAsync(requestString))
    {
        var totalLength = response.ContentLength;
        var downloadedLength = 0;
        var charBuffer = new char[4 * 1024];
        try
        {
            using (var stream = response.GetResponseStream())
            {
                if (stream != null)
                {
                    using (var reader = new StreamReader(stream))
                    using (var fStream = File.Create(filename))
                    {
                        while (!reader.EndOfStream)
                        {
                            var readBytes = await reader.ReadAsync(charBuffer, 0, charBuffer.Length);
                            var decoded = Convert.FromBase64CharArray(charBuffer, 0, readBytes);
                            await fStream.WriteAsync(decoded, 0, decoded.Length);
                            downloadedLength += readBytes;
                            DownloadProgress?.Invoke((float)downloadedLength / totalLength * 100.0f);
                        }
                    }
                }
            }
            if (downloadedLength < totalLength)
            {
                throw new Exception($"Download failed due to a network error. Downloaded {downloadedLength} Bytes.");
            }
            // some follow-up stuff
            return filename;
        }
        catch (Exception e)
        {
            Console.WriteLine("Error!");
            Console.WriteLine(e.Message);
            throw;
        }
    }
}
Any ideas what could cause the error?
EDIT:
OK, I tried to implement the solution Fildor proposed. Since I do not delete the decoded contents of the secondary buffer, more memory is needed now to perform the download. But I could omit the StreamReader and read from the Stream directly. This led to another exception:
Unable to read data from the transport connection: The connection was closed
This happens no matter whether I read synchronously or asynchronously, which seems to support my initial suspicion. But I still don't know how to solve the problem.
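As an aside, the "invalid length" error in the original loop is expected whenever the chunk boundary falls mid-group: Base64 is only decodable in whole 4-character groups, while ReadAsync can return any count. One common fix (my own sketch, not code from the post; names like ChunkedBase64 are hypothetical) is to carry the incomplete trailing group over to the next chunk:

```csharp
using System;
using System.Collections.Generic;

public static class ChunkedBase64
{
    // Decodes base64 text arriving in arbitrary-sized chunks by carrying the
    // incomplete trailing group (0-3 chars) over to the next chunk.
    // Note: real HTTP responses may also contain newlines/whitespace, which
    // would need to be stripped before grouping.
    public static IEnumerable<byte> Decode(IEnumerable<string> chunks)
    {
        var carry = string.Empty;
        foreach (var chunk in chunks)
        {
            var text = carry + chunk;
            var usable = text.Length - (text.Length % 4); // whole 4-char groups only
            carry = text.Substring(usable);
            if (usable > 0)
            {
                foreach (var b in Convert.FromBase64String(text.Substring(0, usable)))
                    yield return b;
            }
        }
        if (carry.Length != 0)
            throw new FormatException("Truncated base64 input.");
    }
}
```

For example, splitting "SGVsbG8sIHdvcmxkIQ==" into chunks of 5, 7, and 8 characters still decodes to the bytes of "Hello, world!".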
I am developing a game in which I need to retrieve data from a stream that never ends.
I have a class called StreamingChannel which creates the streaming channel:
public StreamingChannel()
{
    //stuff to set the stream
    webResponse = (HttpWebResponse)webRequest.GetResponse();
    responseStream = new StreamReader(webResponse.GetResponseStream(), encode);
}
and to read from it I have this method:
public string Read()
{
    try
    {
        string jsonText = responseStream.ReadLine();
        return jsonText;
    }
    catch (ObjectDisposedException)
    {
        return null;
    }
}
I perform the reading every few seconds with InvokeRepeating, and I do that for the whole game.
It works great except that my stream only lasts for about a couple of minutes. After that it throws an ObjectDisposedException.
At first I wanted to restore the connection, but I didn't manage to do that without re-instantiating the whole connection; in that case the problem is that the game lags for about a second.
So how can I tell the StreamReader to leave the channel open?
PS: I cannot use the constructor
public StreamReader(
    Stream stream,
    Encoding encoding,
    bool detectEncodingFromByteOrderMarks,
    int bufferSize,
    bool leaveOpen)
because it was introduced in version 4.5 of the .NET Framework, and Unity doesn't support that.
A streaming API expects your code to pull data out of the Stream pretty aggressively. You may not be able to wait for Unity to schedule your ReadLine method. I think a better model is to use a separate thread to pull data as fast as possible from the Stream and store it in a buffer. (I think this is possible in Unity.) Then you can pull the stream data out of your buffer on the standard Unity thread without worrying about the pull rate. A ConcurrentQueue would be a great buffer, but Unity doesn't support it, so I've used a locked List.
Using a separate thread also allows you to restart after failures without blocking the main game.
using System.Collections.Generic;
using System.Threading;

public class StreamingChannel
{
    private List<string> backgroundLinesList; // not readonly: Read() swaps it out
    private readonly object listLock = new object();
    private Thread streamReaderThread;

    public StreamingChannel()
    {
        streamReaderThread = new Thread(this.ReadWebStream);
        streamReaderThread.Start();
    }

    public List<string> Read()
    {
        if (!streamReaderThread.IsAlive)
        {
            streamReaderThread = new Thread(this.ReadWebStream);
            streamReaderThread.Start();
        }

        List<string> lines = null;
        lock (listLock)
        {
            if (backgroundLinesList != null)
            {
                lines = backgroundLinesList;
                backgroundLinesList = null;
            }
        }
        return lines;
    }

    private void ReadWebStream()
    {
        try
        {
            //stuff to set the stream
            HttpWebRequest webRequest;
            HttpWebResponse webResponse = (HttpWebResponse)webRequest.GetResponse();
            StreamReader responseStream = new StreamReader(webResponse.GetResponseStream(), encode);
            while (!responseStream.EndOfStream)
            {
                var line = responseStream.ReadLine();
                lock (listLock)
                {
                    if (backgroundLinesList == null)
                    {
                        backgroundLinesList = new List<string>();
                    }
                    backgroundLinesList.Add(line);
                }
            }
            log.Debug("Stream closed");
        }
        catch (Exception e)
        {
            log.Debug("WebStream thread failure: " + e + " Stack: " + e.StackTrace);
        }
    }
}
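On the consuming side, a hypothetical MonoBehaviour (my own illustration, not part of the answer; names like StreamConsumer and PollStream are invented) would poll Read() on a repeating schedule. Read() returns null when no new lines have arrived since the last poll:

```csharp
// Hypothetical Unity consumer of StreamingChannel.
public class StreamConsumer : UnityEngine.MonoBehaviour
{
    private StreamingChannel channel;

    void Start()
    {
        channel = new StreamingChannel();
        InvokeRepeating("PollStream", 1f, 1f); // poll once per second
    }

    void PollStream()
    {
        var lines = channel.Read(); // null when the buffer was empty
        if (lines == null) return;
        foreach (var line in lines)
        {
            // parse the JSON line on the main thread
        }
    }
}
```

Because the background thread does all the blocking I/O, the main thread only ever takes the lock briefly to swap the buffer out, so the poll rate no longer has to keep up with the stream.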
Following this tutorial http://msdn.microsoft.com/en-us/library/hh221581.aspx I created an HttpWebRequest,
producing this code for the callback function:
private void ReadCallback(IAsyncResult result)
{
    HttpWebRequest request = result.AsyncState as HttpWebRequest;
    if (request != null)
    {
        try
        {
            WebResponse response = request.EndGetResponse(result);
            using (StreamReader streamReader1 = new StreamReader(response.GetResponseStream()))
            {
                string resultString = streamReader1.ReadToEnd();
            }
        }
        catch (WebException e)
        {
            return;
        }
    }
}
Now I get some data in resultString, but I can't return it the normal way because the call is async (as one can read here: AsyncCallBack - Does it have to be static / Does it have to return void?).
I could create a global variable and save resultString there to access it from everywhere, but I don't think that's the proper way to do something like this. MSDN just writes the results to the console (http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest(v=vs.95).aspx), which is not really what I want.
Is there a "best practice" or something for proceeding with results from async calls (for using them in other methods that are called later on)?
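One common pattern for this (a sketch of mine, not from the post; RequestHelper and GetBodyAsync are invented names) is to wrap the Begin/End pair in a Task via Task.Factory.FromAsync, so the result flows back to callers as a Task<string> instead of through a global variable:

```csharp
using System.IO;
using System.Net;
using System.Threading.Tasks;

public static class RequestHelper
{
    // Surfaces the response body as a Task<string>, which callers can await
    // or attach continuations to, instead of stashing it in shared state.
    public static Task<string> GetBodyAsync(HttpWebRequest request)
    {
        return Task.Factory
            .FromAsync(request.BeginGetResponse, request.EndGetResponse, null)
            .ContinueWith(t =>
            {
                using (var response = t.Result)
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd(); // the "resultString"
                }
            });
    }
}
```

A caller on .NET 4.5+ can then simply write `string body = await RequestHelper.GetBodyAsync(request);` and use the result in any later method.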
Sorry if the title is not clear or correct; I don't know what title I should put. Please correct it if it's wrong.
I have this code to download images from an IP camera, and it can download the images. The problem is: how can I do the image downloading process at the same time for all cameras if I have two or more cameras?
private void GetImage()
{
    string IP1 = "example.IPcam1.com:81/snapshot.cgi";
    string IP2 = "example.IPcam2.com:81/snapshot.cgi";
    .
    .
    .
    string IPn = "example.IPcamn.com:81/snapshot.cgi";

    for (int i = 0; i < 10; i++)
    {
        string ImagePath = Server.MapPath("~\\Videos\\liveRecording2\\") + string.Format("{0}", i, i + 1) + ".jpeg";
        string sourceURL = ip;
        WebRequest req = (WebRequest)WebRequest.Create(sourceURL);
        req.Credentials = new NetworkCredential("user", "password");
        WebResponse resp = req.GetResponse();
        Stream stream = resp.GetResponseStream();
        Bitmap bmp = (Bitmap)Bitmap.FromStream(stream);
        bmp.Save(ImagePath);
    }
}
You should not run long-running code like that from an ASP.NET application. They are meant to simply respond to requests.
You should place this code in a service (Windows Services are easy), and control the service through a WCF service running inside of it.
You're also going to get into trouble because you don't have your WebResponse and Stream in using blocks.
There are several methods that will depend on how you want to report feedback to the user. It all comes down to multi-threading.
Here is one example, using the ThreadPool. Note that this is missing a bunch of error checking throughout... It is here as an example of how to use the ThreadPool, not as a robust application:
private Dictionary<String, String> _cameras = new Dictionary<String, String> {
    { "http://example.IPcam1.com:81/snapshot.cgi", "/some/path/for/image1.jpg" },
    { "http://example.IPcam2.com:81/snapshot.cgi", "/some/other/path/image2.jpg" },
};

public void DoImageDownload() {
    int finished = 0;
    foreach (KeyValuePair<String, String> pair in _cameras) {
        var current = pair; // copy: don't capture the loop variable in the closure
        ThreadPool.QueueUserWorkItem(delegate {
            BeginDownload(current.Key, current.Value);
            Interlocked.Increment(ref finished); // thread-safe counter update
        });
    }

    while (finished < _cameras.Count) {
        Thread.Sleep(1000); // sleep 1 second
    }
}

private void BeginDownload(String src, String dest) {
    WebRequest req = (WebRequest) WebRequest.Create(src);
    req.Credentials = new NetworkCredential("username", "password");
    using (WebResponse resp = req.GetResponse())
    using (Stream input = resp.GetResponseStream())
    using (Stream output = File.Create(dest)) {
        input.CopyTo(output);
    }
}
This example simply takes the work you are doing in the for loop and off-loads it to the thread pool for processing. The DoImageDownload method will return very quickly, as it is not doing much actual work.
Depending on your use case, you may need a mechanism to wait for the images to finish downloading from the caller of DoImageDownload. A common approach would be the use of event callbacks at the end of BeginDownload to notify when the download is complete. I have put a simple while loop here that will wait until the images finish... Of course, this needs error checking in case images are missing or the delegate never returns.
Be sure to add your error checking throughout... Hopefully this gives you a place to start.
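As one possible refinement of the waiting step (my own variation, not the answer's code; DownloadCoordinator and the downloadOne delegate are invented names), the polling loop can be replaced by a CountdownEvent that blocks until every queued work item has signaled:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public class DownloadCoordinator
{
    // Queues one work item per source and blocks until all have completed.
    // downloadOne stands in for the real per-camera download work.
    public static void RunAll(IList<string> sources, Action<string> downloadOne)
    {
        using (var done = new CountdownEvent(sources.Count))
        {
            foreach (var src in sources)
            {
                var current = src; // copy: don't capture the loop variable
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try { downloadOne(current); }
                    finally { done.Signal(); } // signal even if the download throws
                });
            }
            done.Wait(); // returns once every item has signaled
        }
    }
}
```

This avoids both the one-second granularity of the sleep loop and the risk of a failed delegate never incrementing the counter.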
I'm trying to create a collection of FTP web requests to download a collection of files.
This was working correctly in a single thread, but now I'm trying to do it with multiple threads and am getting a timeout exception. I think I'm missing something pretty simple but cannot seem to work it out.
Here is the code:
internal static void DownloadLogFiles(IEnumerable<string> ftpFileNames, string localLogsFolder)
{
    BotFinder.DeleteAllFilesFromDirectory(localLogsFolder);
    var ftpWebRequests = new Collection<FtpWebRequest>();

    // Create web request for each log filename
    foreach (var ftpWebRequest in ftpFileNames.Select(filename => (FtpWebRequest) WebRequest.Create(filename)))
    {
        ftpWebRequest.Credentials = new NetworkCredential(BotFinderSettings.FtpUserId, BotFinderSettings.FtpPassword);
        ftpWebRequest.KeepAlive = false;
        ftpWebRequest.UseBinary = true;
        ftpWebRequest.CachePolicy = NoCachePolicy;
        ftpWebRequest.Method = WebRequestMethods.Ftp.DownloadFile;
        ftpWebRequests.Add(ftpWebRequest);
    }

    var threadDoneEvents = new ManualResetEvent[ftpWebRequests.Count];
    for (var x = 0; x < ftpWebRequests.Count; x++)
    {
        var ftpWebRequest = ftpWebRequests[x];
        threadDoneEvents[x] = new ManualResetEvent(false);
        var threadedFtpDownloader = new ThreadedFtpDownloader(ftpWebRequest, threadDoneEvents[x]);
        ThreadPool.QueueUserWorkItem(threadedFtpDownloader.PerformFtpRequest, localLogsFolder);
    }
    WaitHandle.WaitAll(threadDoneEvents);
}
class ThreadedFtpDownloader
{
    private ManualResetEvent threadDoneEvent;
    private readonly FtpWebRequest ftpWebRequest;

    public ThreadedFtpDownloader(FtpWebRequest ftpWebRequest, ManualResetEvent threadDoneEvent)
    {
        this.threadDoneEvent = threadDoneEvent;
        this.ftpWebRequest = ftpWebRequest;
    }

    internal void PerformFtpRequest(object localLogsFolder)
    {
        try
        {
            // TIMEOUT IS HAPPENING ON LINE BELOW
            using (var response = ftpWebRequest.GetResponse())
            using (var responseStream = response.GetResponseStream())
            {
                const int length = 1024 * 10;
                var buffer = new Byte[length];
                var bytesRead = responseStream.Read(buffer, 0, length);
                var logFileToCreate = string.Format("{0}{1}{2}", localLogsFolder,
                    ftpWebRequest.RequestUri.Segments[3].Replace("/", "-"),
                    ftpWebRequest.RequestUri.Segments[4]);
                using (var writeStream = new FileStream(logFileToCreate, FileMode.OpenOrCreate))
                {
                    while (bytesRead > 0)
                    {
                        writeStream.Write(buffer, 0, bytesRead);
                        bytesRead = responseStream.Read(buffer, 0, length);
                    }
                }
            }
            threadDoneEvent.Set();
        }
        catch (Exception exception)
        {
            BotFinder.HandleExceptionAndExit(exception);
        }
    }
}
It seems to download the first two files (using two threads, I'm assuming), but then a timeout occurs when these complete and the application tries to move on to the next file.
I can confirm that the FtpWebRequest which is timing out is valid and the file exists; I think I may have left a connection open or something.
I was going to post this as a comment, but it's probably easier to read as an answer:
Firstly, if I set the ftpRequest.Timeout property to Timeout.Infinite, the timeout issue disappears; however, an infinite timeout is probably not best practice, so I'd prefer to solve this another way.
Debugging the code, I can see that when it gets to:
ThreadPool.QueueUserWorkItem(threadedFtpDownloader.PerformFtpRequest, localLogsFolder);
it enters the PerformFtpRequest method for each FTP web request and calls ftpWebRequest.GetResponse(), but then only the first two requests progress any further. The rest of the requests stay active but go no further until the first two finish. This basically means they are left open while waiting for the other requests to complete.
I think the solution would be either to allow all the requests to execute at once (the ConnectionLimit property is having no effect here) or to prevent execution from calling GetResponse until the response is actually ready to be used.
Any good ideas on the best way to solve this? At the moment all I can think of are hacky solutions which I'd like to avoid :)
Thanks!
You should get the ServicePoint for the request and set its ConnectionLimit:
ServicePoint sp = ftpRequest.ServicePoint;
sp.ConnectionLimit = 10;
The default ConnectionLimit is 2 - that's why you're seeing exactly two requests progress at a time.
UPDATE: See this answer for a more thorough explanation:
How to improve the Performance of FtpWebRequest?
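Applied to the code above, the limit would be raised before the work items are queued. This is a sketch (the helper name and the value 10 are my own choices); note that a ServicePoint is shared per host/port, so setting it on any one request affects all requests to that host:

```csharp
using System.Net;

static class ConnectionLimits
{
    // Raise the per-host connection limit so more than two FTP transfers
    // to the same server can run concurrently.
    public static void Raise(FtpWebRequest anyRequestToHost, int limit)
    {
        // Affects every request to the same host/port as this request.
        anyRequestToHost.ServicePoint.ConnectionLimit = limit; // e.g. 10

        // Alternatively, set the process-wide default BEFORE creating
        // any requests, so new ServicePoints pick it up:
        ServicePointManager.DefaultConnectionLimit = limit;
    }
}
```

With the limit raised, all the queued GetResponse() calls can open their data connections instead of queuing behind the first two.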