I'm trying to develop a .NET app that communicates with a .NET Core server app (which I did not develop). The main goal is to download a file. Since the client app will have a WPF GUI, the whole download should happen asynchronously.
From reading the server app's API I know that the response to my request is a Base64-encoded string containing the contents of a file.
What I wanted to do is send the request asynchronously, take its response stream, asynchronously read from that stream into a char array, Base64-decode it, and asynchronously write it to a file (see code below).
But Convert.FromBase64CharArray most often fails with an exception:
invalid length for a base-64 char array or string
Occasionally the call succeeds, but the download ends prematurely (downloadedLength < totalLength).
It seems as if the connection was closed too early, but I'm not entirely sure whether that's true.
What I tried so far to resolve this issue:
Using streamReader.ReadToEndAsync(), decoding the complete string, and writing asynchronously: worked, but downloading 115 MB used around 600 MB of RAM
Making the whole { read, decode, write } block async as one unit instead of async read, decode, async write: no improvement
No async at all: fails sometimes, but not as often as the async version
Using FromBase64Transform.TransformBlock instead of Convert.FromBase64CharArray: didn't finish the download in a reasonable time, since InputBlockSize is fixed at 1 byte (the download is about 115 MB)
Communicating through an SSH tunnel to bypass the Apache server: the download didn't even start
Running client and server on the same machine: seemed to work fine
Some specs:
Client: Windows 7 x64, .NET 4.6.1
Server: Ubuntu 16.04, Apache 2.4, .NET Core 2.1.4
And finally: the Code
The function that requests the file:
private async Task<WebResponse> DoGetRequestAsync(string requestString)
{
var completeRequestUrl = $"{_options.StoreUrl}/api/{requestString}";
try
{
RequestStarted?.Invoke(true);
var request = (HttpWebRequest)WebRequest.Create(completeRequestUrl);
request.ContentType = "text/plain";
request.Method = "GET";
var response = await request.GetResponseAsync();
RequestFinished?.Invoke(true);
return response;
}
catch (Exception e)
{
Console.WriteLine($"ERROR: {e.Message}");
}
return null;
}
The function that handles the response:
public async Task<string> DownloadPackage(string vendor, string package)
{
// declaring some vars
using (var response = await DoGetRequestAsync(requestString))
{
var totalLength = response.ContentLength;
var downloadedLength = 0;
var charBuffer = new char[4 * 1024];
try
{
using (var stream = response.GetResponseStream())
{
if (stream != null)
{
using (var reader = new StreamReader(stream))
using (var fStream = File.Create(filename))
{
while (!reader.EndOfStream)
{
var readBytes = await reader.ReadAsync(charBuffer, 0, charBuffer.Length);
var decoded = Convert.FromBase64CharArray(charBuffer, 0, readBytes);
await fStream.WriteAsync(decoded, 0, decoded.Length);
downloadedLength += readBytes;
DownloadProgress?.Invoke((float)downloadedLength / totalLength * 100.0f);
}
}
}
}
if (downloadedLength < totalLength)
{
throw new Exception($"Download failed due to a network error. Downloaded {downloadedLength} Bytes.");
}
// some follow-up stuff
return filename;
}
catch (Exception e)
{
Console.WriteLine("Error!");
Console.WriteLine(e.Message);
throw;
}
}
}
Any ideas what could cause the error?
EDIT:
Ok, I tried to implement the solution Fildor proposed. Since I do not delete the decoded contents of the secondary buffer, more memory is needed now to perform the download. But I could omit the StreamReader and read from the Stream directly. This led to another exception:
Unable to read data from the transport connection: The connection was closed
This happens whether I read synchronously or asynchronously, which seems to confirm my first suspicion. But I still don't know how to solve this problem.
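An aside on the first exception: Convert.FromBase64CharArray requires a char count that is a multiple of 4, while ReadAsync can return any number of chars, so most chunks will have an invalid length. A minimal sketch of one way to keep each decode call aligned (assuming the payload contains no embedded whitespace or line breaks, and reusing the variable names from the code above):

```csharp
// Sketch only: align chunks so Convert.FromBase64CharArray always gets
// a char count that is a multiple of 4.
var charBuffer = new char[4 * 1024];
var carry = new char[3];   // at most 3 chars can be left over per read
int carryCount = 0;

using (var reader = new StreamReader(stream))
using (var fStream = File.Create(filename))
{
    int read;
    while ((read = await reader.ReadAsync(charBuffer, 0, charBuffer.Length)) > 0)
    {
        // prepend the leftover chars from the previous read
        var chunk = new char[carryCount + read];
        Array.Copy(carry, 0, chunk, 0, carryCount);
        Array.Copy(charBuffer, 0, chunk, carryCount, read);

        int usable = (chunk.Length / 4) * 4;   // largest multiple of 4
        carryCount = chunk.Length - usable;
        Array.Copy(chunk, usable, carry, 0, carryCount);

        if (usable > 0)
        {
            var decoded = Convert.FromBase64CharArray(chunk, 0, usable);
            await fStream.WriteAsync(decoded, 0, decoded.Length);
        }
    }
}
```

Any chars left over after taking the largest multiple of 4 are carried into the next iteration, so every decode call sees complete 4-char groups.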
Related
I am using Blazor WebAssembly and .NET 5.0. I need to be able to upload very large files (2-5 GB) to Azure Blob Storage using chunking: uploading the file data in stages and then firing a final commit message on the blob once all blocks have been staged.
I was able to achieve this using SharedAccessSignatures and the Azure JavaScript libraries (there are many examples available online).
However, I would like to handle this in pure C#. Where I am running into an issue is that IBrowserFile seems to load the entire file into memory rather than reading just the chunk it needs for each stage of the loop.
For simplicity's sake, my example code below does not include any Azure Blob Storage code. I simply write the chunking and commit messages to the console:
@page "/"
<InputFile OnChange="OnInputFileChange" />
@code {
async Task OnInputFileChange(InputFileChangeEventArgs e)
{
try
{
var file = e.File;
int blockSize = 1 * 1024 * 1024;//1 MB Block
int offset = 0;
int counter = 0;
List<string> blockIds = new List<string>();
using (var fs = file.OpenReadStream(5000000000)) //<-- Need to go up to 5GB
{
var bytesRemaining = fs.Length;
do
{
var dataToRead = Math.Min(bytesRemaining, blockSize);
byte[] data = new byte[dataToRead];
var dataRead = fs.Read(data, offset, (int)dataToRead);
bytesRemaining -= dataRead;
if (dataRead > 0)
{
var blockId = Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(counter.ToString("d6")));
Console.WriteLine($"blockId:{blockId}");
Console.WriteLine(string.Format("Block {0} uploaded successfully.", counter.ToString("d6")));
blockIds.Add(blockId);
counter++;
}
}
while (bytesRemaining > 0);
Console.WriteLine("All blocks uploaded. Now committing block list.");
Console.WriteLine("Blob uploaded successfully!");
}
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
}
}
The first issue is that:
Synchronous reads are not supported.
So I tried:
var fs = new System.IO.MemoryStream();
await file.OpenReadStream(5000000000).CopyToAsync(fs);
using (fs)
{
...
}
But obviously I am now going to run into memory issues! And I do. The error on even a 200kb file is:
Out of memory
And anything over 1MB:
Garbage collector could not allocate 16384u bytes of memory for major heap section.
Is there a way to read in smaller chunks of data at a time from the IBrowserFile so this can be achieved natively in client side Blazor without having to resort to JavaScript?
.NET has a nice Stream.CopyToAsync() implementation; the reference source can be found here:
https://github.com/microsoft/referencesource/blob/master/mscorlib/system/io/stream.cs
It copies the data from one stream to another asynchronously.
The gist of it is this:
private async Task CopyToAsyncInternal(Stream source, Stream destination, Int32 bufferSize, CancellationToken cancellationToken)
{
byte[] buffer = new byte[bufferSize];
int bytesRead;
while ((bytesRead = await source.ReadAsync(buffer, 0, buffer.Length, cancellationToken).ConfigureAwait(false)) != 0)
{
await destination.WriteAsync(buffer, 0, bytesRead, cancellationToken).ConfigureAwait(false);
}
}
(copied from link above)
Set the bufferSize to something like 4096 or a multiple of it and it should work. Other values are also possible, but blocks are usually taken as a multiple of 4 KB.
The assumption here is that you have a writable stream to which you can write the bytes asynchronously. You can modify this loop to count blocks and do other per-block work. In any case, don't use a MemoryStream on the client or server side with large files.
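For example, the loop could be adapted to count blocks roughly like this (a sketch only; the method name is illustrative and the block-id scheme is borrowed from the question's console output):

```csharp
// Needs: System, System.Collections.Generic, System.IO, System.Text,
// System.Threading, System.Threading.Tasks
private static async Task<List<string>> CopyInBlocksAsync(
    Stream source, Stream destination, int blockSize, CancellationToken cancellationToken)
{
    var blockIds = new List<string>();
    var buffer = new byte[blockSize];
    int bytesRead;
    int counter = 0;
    while ((bytesRead = await source.ReadAsync(buffer, 0, buffer.Length, cancellationToken)) != 0)
    {
        // stage this block (in the real scenario this write would go to blob storage)
        await destination.WriteAsync(buffer, 0, bytesRead, cancellationToken);
        blockIds.Add(Convert.ToBase64String(Encoding.UTF8.GetBytes(counter.ToString("d6"))));
        counter++;
    }
    return blockIds; // commit the block list once all blocks are staged
}
```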
I'm creating a smart home server and I want to support Google Smart Home actions. The server app is written in C# using HttpListener, and I run it on a Debian 10 server with Mono 5.18 (version 5.20 has problems with httpcfg). The server app works correctly, but roughly 1 out of 5 queries never reaches the program. Tcpdump shows some traffic, but the app doesn't receive any of it.
I tried reinstalling Debian twice, using different Mono versions, changing the port, running it on Windows 10, disabling the part of the code that provides MQTT and MySQL support, and disabling the firewall; nothing helped. The main problem is that the Google server stops sending packets after a single failure, and then I must disconnect and reconnect my devices in the Google Home app.
Here is my code with HttpListener:
static HttpListener listener;
//...
static void Main(string[] args)
{
//...
HttpServiceMain();
}
//...
private static void HttpServiceMain()
{
listener = new HttpListener();
listener.Prefixes.Add("https://*:2030/");
listener.Start();
while (true)
{
ProcessRequest();
}
}
static void ProcessRequest()
{
var result = listener.BeginGetContext(ListenerCallback, listener);
var startNew = Stopwatch.StartNew();
result.AsyncWaitHandle.WaitOne();
startNew.Stop();
Console.WriteLine("Elapsed milliseconds: " + startNew.ElapsedMilliseconds);
}
static void ListenerCallback(IAsyncResult ar)
{
Console.WriteLine("Listening...");
HttpListenerContext context = listener.EndGetContext(ar);
HttpListenerRequest request = context.Request;
string documentContents;
using (Stream receiveStream = request.InputStream)
{
using (StreamReader readStream = new StreamReader(receiveStream, Encoding.UTF8))
{
documentContents = readStream.ReadToEnd();
}
}
string responseString = "{}";
//Creating response and exporting it to 'responseString'
byte[] buffer = System.Text.Encoding.UTF8.GetBytes(responseString);
HttpListenerResponse httpResponse = context.Response;
httpResponse.StatusCode = 200;
httpResponse.StatusDescription = "OK";
httpResponse.ContentLength64 = buffer.Length;
System.IO.Stream output = httpResponse.OutputStream;
output.Write(buffer, 0, buffer.Length);
httpResponse.Close();
}
As I said, everything works fine 4 out of 5 times, but after some requests the server doesn't get the query and I must reconnect the Google Home app with my service. Is it a bug in HttpListener, in my code, or in Google's server? Do you have any ideas?
The fact that you didn't get all the results while implementing the QUERY intent means that there is some problem communicating with the server. You can try to troubleshoot by following the Troubleshooting Guide, to see whether we are sending requests to your server.
When you don't get the requests you expect, the other culprit might be the OAuth implementation, as you found out. If Google does not get valid access tokens for your users, it might not send all the requests to your server.
Problem
I am trying to upload some data to a web-service.
I want to upload the data in chunks and have the web service read each chunk in turn. However, what I find in practice is that the web service only reads a full buffer at a time.
Is there a way to get WebAPI (self-hosted on OWIN ideally, but I can use IIS if necessary) to respect the transfer chunks?
I have verified in Wireshark that my client is sending the data chunked, which is why I believe this is a WebAPI issue.
For clarity, streaming data in the response works absolutely fine; my question is about reading chunked data from the request stream.
Code
The controller looks like this:
using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using System.Web.Http;
public class StreamingController : ApiController
{
[HttpPost]
public async Task<HttpResponseMessage> Upload()
{
var stream = await this.Request.Content.ReadAsStreamAsync();
var data = new byte[20];
int chunkCount = 1;
while (true)
{
// I was hoping that every time I sent a chunk, then
// ReadAsync would return, but I find that it will only
// return when I have sent 20 bytes of data.
var bytesRead = await stream.ReadAsync(data, 0, data.Length);
if (bytesRead <= 0)
{
break;
}
Console.WriteLine($"{chunkCount++}: {Encoding.UTF8.GetString(data)}");
}
return new HttpResponseMessage(HttpStatusCode.OK);
}
}
My test client looks like this:
void Main()
{
var url = "http://localhost:6001/streaming/upload";
var relayRequest = (HttpWebRequest)HttpWebRequest.Create(url);
relayRequest.Method = "POST";
relayRequest.AllowWriteStreamBuffering = false;
relayRequest.AllowReadStreamBuffering = false;
relayRequest.SendChunked = true;
relayRequest.ContentType = "application/octet-stream";
var stream = relayRequest.GetRequestStream();
string nextLine;
int totalBytes = 0;
// Read a series of lines from the console and transmit them to the server.
while(!string.IsNullOrEmpty((nextLine = Console.ReadLine())))
{
var bytes = Encoding.UTF8.GetBytes(nextLine);
totalBytes += bytes.Length;
Console.WriteLine(
"CLIENT: Sending {0} bytes ({1} total)",
bytes.Length,
totalBytes);
stream.Write(bytes, 0, bytes.Length);
stream.Flush();
}
var response = relayRequest.GetResponse();
Console.WriteLine(response);
}
Justification
My specific motivation is that I am writing an HTTPS tunnel for an RTP client. However, this question also makes sense in the context of an instant-messaging chat application: you wouldn't want a partial chat message to come through and then have to wait for message 2 to find out the end of message 1!
The decoding of Transfer-Encoding: chunked happens a long way away from your controllers. Depending on your host, it may not even happen in the application at all, but be handled by the http.sys pipeline API that most servers plug into.
For your application to even have a chance of looking into this data, you'll need to move away from IIS/HttpListener and use Sockets instead.
Of interest might be the Nowin project, that provides all the OWIN features without using HttpListener, instead relying on the Socket async APIs. I don't know much about it, but there might be hooks to get at the stream before it gets decoded... Seems like a lot of effort though.
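To make the distinction concrete, here is a hedged sketch of what a raw socket accept loop might look like. Each read returns whatever bytes have arrived on the wire, so the chunked framing (hex size line, CRLF, data, CRLF) is still visible instead of being decoded away by the host:

```csharp
// Assumes: using System; using System.Net; using System.Net.Sockets; using System.Text;
var listener = new TcpListener(IPAddress.Loopback, 6001);
listener.Start();
using (var client = await listener.AcceptTcpClientAsync())
using (var stream = client.GetStream())
{
    var buffer = new byte[4096];
    int read;
    while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
    {
        // prints the raw request bytes, chunk framing included
        Console.Write(Encoding.ASCII.GetString(buffer, 0, read));
    }
}
```

Of course, once you work at this level you also have to parse the HTTP headers and chunk sizes yourself, which is part of why this route is a lot of effort.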
I think I'm missing something about how HttpWebRequest works when streaming large file uploads.
Basically, I found that I receive a timeout exception when sending large files to the server, so a post suggested doing it asynchronously and handling the timeout myself.
The thing is, after debugging I found that calling GetRequestStreamAsync and writing to the stream does nothing on the server side; the server is only called when doing GetResponseAsync.
So my question is:
- In the code marked //1, the file is written to the request stream, but I don't see the memory increasing or the server receiving any request. Where does the streamed data go?
This is basically my code:
HttpWebRequest request = RESTUtils.InitializeRequest(...);
request.AllowWriteStreamBuffering = false;
request.ContentLength = i_InputStream.Length;
request.Timeout = 5000;
using (Stream requestStream = request.GetRequestStreamWithTimeout())
{
if (requestStream != null) //1
{
// We will write the stream to the request
byte[] buffer = new byte[UPLOAD_FILE_BUFFER_SIZE];
int read = i_InputStream.Read(buffer, 0, buffer.Length);
while (read > 0)
{
requestStream.Write(buffer, 0, read);
read = i_InputStream.Read(buffer, 0, buffer.Length);
}
}
}
using (var response = request.GetResponseWithTimeout(-1))
{
using (var responseStream = response.GetResponseStream())
{
}
}
public static class WebRequestExtensions
{
public static Stream GetRequestStreamWithTimeout(
this WebRequest request,
int? millisecondsTimeout = null)
{
return AsyncToSyncWithTimeout(
request.BeginGetRequestStream,
request.EndGetRequestStream,
millisecondsTimeout ?? request.Timeout);
}
public static WebResponse GetResponseWithTimeout(
this HttpWebRequest request,
int? millisecondsTimeout = null)
{
return AsyncToSyncWithTimeout(
request.BeginGetResponse,
request.EndGetResponse,
millisecondsTimeout ?? request.Timeout);
}
private static T AsyncToSyncWithTimeout<T>(
Func<AsyncCallback, object, IAsyncResult> begin,
Func<IAsyncResult, T> end,
int millisecondsTimeout)
{
var iar = begin(null, null);
if (!iar.AsyncWaitHandle.WaitOne(millisecondsTimeout))
{
var ex = new TimeoutException();
throw new WebException(ex.Message, ex, WebExceptionStatus.Timeout, null);
}
return end(iar);
}
}
Thanks!
== Edit 9/9/15 ==
Something even weirder happens: if I attach a breakpoint right after GetResponseAsync, I see that the server receives the call.
After that, if I kill the client process, the server still uploads the file successfully.
This also happens if I call Abort.
Does anyone know why?
Instead of using the old-style Begin/End async pattern, you should consider switching to async/await, which would greatly simplify your code.
You would then set the Timeout property on the request to a large value to accommodate your waiting time; instead of the callback-based async code, you could just do:
var request = SomeMethodToCreateRequest();
request.Timeout = int.MaxValue; // (don't do this)
var response = await request.GetResponseAsync();
The timeout should be respected internally, and you get to simplify your code.
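Put together, the upload from the question might look roughly like this with async/await (a sketch under the same assumptions as the original code; i_InputStream and the request setup come from the question, and the Timeout value mirrors the suggestion above):

```csharp
// request is an HttpWebRequest created as in the question
request.AllowWriteStreamBuffering = false;
request.ContentLength = i_InputStream.Length;
request.Timeout = System.Threading.Timeout.Infinite; // or a suitably large value

using (Stream requestStream = await request.GetRequestStreamAsync())
{
    // streams the file to the request without buffering it all in memory
    await i_InputStream.CopyToAsync(requestStream);
}

using (var response = await request.GetResponseAsync())
using (var responseStream = response.GetResponseStream())
{
    // read the response here
}
```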
I have code like this:
public async void Start()
{
Logger.Log(Logger.LogLevel.General, "Beginning Listen!");
HttpListener listener = new HttpListener();
listener.Prefixes.Add(Statics.Config.IniReadValue("http-server"));
listener.Start();
while (true)
{
HttpListenerContext client = await listener.GetContextAsync();
AcceptClient(client);
}
}
public async void AcceptClient(HttpListenerContext client)
{
try
{
string sRequest = Helpers.GetRequestBody(client.Request);
if (sRequest == "")
return;
client.Response.ContentType = "application/json";
//Do a bunch of stuff here
string s = JsonConvert.SerializeObject(response);
byte[] byteArray = Encoding.UTF8.GetBytes(s);
client.Response.ContentLength64 = byteArray.Length;
client.Response.OutputStream.Write(byteArray, 0, byteArray.Length);
client.Response.OutputStream.Close();
client.Response.Close();
}
catch (Exception e)
{
Logger.Log(Logger.LogLevel.Error, e.ToString());
}
}
The code works perfectly fine on Windows using .NET, but in my testing on Ubuntu 13.04 the client is dropped. I'm using Mono 3.2.1.
The code is for an RPC server that is connected to by a C++ client I cannot change. The client expects the connection to remain open the whole time, and it fails with a broken pipe on Unix and error code 5 (EOF) on Windows when using this server with Mono.
There is no problem on connection, but after the first command the client fails. No exception is raised. Thanks for your help!
EDIT 1: I ripped apart the Mono HttpListener and used it in my project directly, and now it fails on .NET too. Definitely something's wrong with the code. P.S. this time it was the newest commit.
My first question, and I solved it myself :D
What I was doing wrong was disposing the Request.InputStream stream myself, which shouldn't be done. While .NET had no problem with me doing that, Mono checks whether the connection can be reused and failed because the stream was disposed.
So I removed the code that disposed the stream, and it works!
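For anyone who lands here, the fix can be sketched as follows (this is a sketch of the change, assuming a helper like the GetRequestBody from the question): read the body but leave the InputStream for HttpListener to dispose, e.g. via the StreamReader overload with leaveOpen:

```csharp
static string GetRequestBody(HttpListenerRequest request)
{
    // leaveOpen: true keeps request.InputStream alive so HttpListener
    // can still check whether the connection is reusable afterwards
    using (var reader = new StreamReader(
        request.InputStream,
        request.ContentEncoding,
        detectEncodingFromByteOrderMarks: false,
        bufferSize: 4096,
        leaveOpen: true))
    {
        return reader.ReadToEnd();
    }
}
```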