Read stream "freezes" despite data being available after a certain number of messages - C#

I'm working on a C# application that communicates with the clangd language server.
clangd is started as a separate process managed by my C# program and communication works via I/O redirection.
My program is exchanging language server protocol request-response pairs with clangd. Requests are sent to clangd via its process' StandardInput stream and responses are read using its process' StandardOutput stream. clangd emits debug information using its process' StandardError stream.
I am using async methods for reading and writing in order to keep the user interface responsive.
However, after sending the third textDocument/didOpen message, my program freezes while trying to read the response.
Making use of the StandardError stream, I found out that clangd processes the third textDocument/didOpen message correctly, as it emits debug messages, meaning the response should be available on StandardOutput.
I saved the requests in a file and sent those to a clangd instance running on the command line, which worked like a charm. I attached that file in case you need it.
Furthermore, the debug messages that are read at the bottom of the SendRequest method indicate that the file was opened:
I[09:55:53.552] <-- textDocument/didOpen(26)
I[09:55:56.512] Updating file C:\Temp\crossrail\src\ClLogic.cpp with command [C:\Temp\crossrail\src] clang C:\Temp\crossrail\src\ClLogic.cpp -resource-dir=C:\Program Files (x86)\LLVM\bin\..\lib\clang\8.0.0
Below you can see the LSP client code for reading and writing the responses. I marked the location that gets blocked.
private Process languageServer;
private StreamWriter requestWriter;
private StreamReader responseReader;
private StreamReader errorReader;
// ...
public void Connect(String workingDirectory)
{
if (this.Connected == false)
{
this.currentMessageID = LSP_FIRST_MESSAGE_ID;
this.languageServer.StartInfo.WorkingDirectory = workingDirectory;
this.languageServer.Start();
this.Connected = true;
this.requestWriter = this.languageServer.StandardInput;
this.responseReader = this.languageServer.StandardOutput;
this.errorReader = this.languageServer.StandardError;
}
}
public async Task<String> Query<T>(JsonRpcRequest<T> request)
{
await mutex.WaitAsync();
try
{
await this.SendRequest(request);
return await this.ReadResponse();
}
finally
{
mutex.Release();
}
}
private async Task SendRequest<T>(JsonRpcRequest<T> request)
{
request.ID = this.currentMessageID;
++this.currentMessageID;
String requestBody = request.ToString();
Console.WriteLine(requestBody);
await this.requestWriter.WriteAsync(requestBody.ToCharArray(), 0, requestBody.Length);
await this.requestWriter.FlushAsync();
if (request.ID == 26) // ID of the third textDocument/didOpen message
{
//await this.ReadErrors(); // the debug messages following the third textDocument/didOpen request are printed correctly
}
}
private async Task<String> ReadResponse()
{
String contentLengthHeader = await this.responseReader.ReadLineAsync(); // blocks after the third textDocument/didOpen message
int responseLength = Int32.Parse
(
contentLengthHeader.Substring(contentLengthHeader.IndexOf(LSP_HEADER_KEY_VALUE_DELIMITER) + LSP_HEADER_VALUE_OFFSET)
.Trim()
);
await this.responseReader.ReadLineAsync();
char[] buffer = new char[BUFFER_SIZE];
StringBuilder response = new StringBuilder();
int totalReadBytes = 0;
while (totalReadBytes < responseLength)
{
int readBytes = await this.responseReader.ReadAsync(buffer, 0, BUFFER_SIZE);
response.Append(buffer, 0, readBytes);
totalReadBytes += readBytes;
}
Console.WriteLine(response.ToString());
return response.ToString();
}
public async Task SendFileCloseMessage(DocumentCloseRequest request)
{
await mutex.WaitAsync();
try
{
await this.SendRequest(request);
this.responseReader.DiscardBufferedData();
}
finally
{
mutex.Release();
}
}
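For reference, the languageServer Process is configured with I/O redirection roughly as follows before Connect is called (a reconstructed sketch; the executable name is a placeholder, not the actual value):
this.languageServer = new Process();
this.languageServer.StartInfo.FileName = "clangd"; // placeholder; the actual path/arguments are not shown here
this.languageServer.StartInfo.UseShellExecute = false;
this.languageServer.StartInfo.CreateNoWindow = true;
this.languageServer.StartInfo.RedirectStandardInput = true;   // becomes requestWriter
this.languageServer.StartInfo.RedirectStandardOutput = true;  // becomes responseReader
this.languageServer.StartInfo.RedirectStandardError = true;   // becomes errorReader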
Here is my code using the LSP client's methods for sending the textDocument/didOpen message:
private async Task InitializeLanguageServer(bool highlightFunctionDeclarations)
{
if (this.languageServer.Connected == false)
{
this.languageServer.Connect(this.workingDirectory);
await this.SendInitializationMessage();
}
await this.SendOpenFileMessage();
await this.LoadSymbolDeclarationLocations(highlightFunctionDeclarations);
await this.LoadSymbolUsages();
}
private async Task SendInitializationMessage()
{
InitializationRequest request = new InitializationRequest
(
this.workingDirectory,
System.Diagnostics.Process.GetCurrentProcess().Id,
false,
ApplicationSettings.LANGUAGE_SERVER_PROTOCOL_SUPPORTED_SYMBOLS
);
Console.WriteLine(await this.languageServer.Query(request));
}
private async Task SendOpenFileMessage()
{
DocumentOpenRequest request = new DocumentOpenRequest(this.filePath, "cpp", 1, this.SourceCode);
Console.WriteLine(await this.languageServer.Query(request));
}
InitializeLanguageServer is called in the constructor without await, but that shouldn't be a problem, as clangd is fast enough to process every source code file in a maximum of 2.5 seconds.
The languageServer member is retrieved using TinyIoC:
public SourceViewerVM()
{
// ...
this.languageServer = TinyIoCContainer.Current.Resolve<LanguageServerProtocolClient>();
#pragma warning disable CS4014
this.InitializeLanguageServer(highlightFunctionDeclarations);
#pragma warning restore CS4014
}
Edit:
The reading really blocks and isn't just waiting for a newline character. If I put the following code at my breakpoint, which usually reads StandardError, execution blocks too:
if (request.ID == 26) // 26 is the ID of the third textDocument/didOpen message
{
char[] buffer = new char[BUFFER_SIZE];
int readBytes = await this.responseReader.ReadAsync(buffer, 0, BUFFER_SIZE); // blocking
Console.WriteLine(new String(buffer, 0, readBytes));
//await this.ReadErrors();
}

Related

Using NamedPipeServerStream and NamedPipeClientStream asynchronously

I have the following requirements for a server/client architecture:
Write a server/client that works asynchronously.
The communication needs to be a duplex, i.e., reads and writes on both ends.
Multiple clients can connect to the server at any given time.
Server/client should wait until they become available and finally make a connection.
Once a client connects it should write to the stream.
Then the server should read from the stream and write a response back to the client.
Finally, the client should read the response and the communication should end.
So with those requirements in mind I've written the following code, but I'm not too sure about it because the docs for pipes are somewhat lacking, unfortunately, and the code doesn't seem to work correctly; it hangs at a certain point.
namespace PipesAsyncAwait471
{
using System;
using System.Collections.Generic;
using System.IO.Pipes;
using System.Linq;
using System.Threading.Tasks;
internal class Program
{
private static async Task Main()
{
List<Task> tasks = new List<Task> {
HandleRequestAsync(),
};
tasks.AddRange(Enumerable.Range(0, 10).Select(i => SendRequestAsync(i, 0, 5)));
await Task.WhenAll(tasks);
}
private static async Task HandleRequestAsync()
{
using (NamedPipeServerStream server = new NamedPipeServerStream("MyPipe",
PipeDirection.InOut,
NamedPipeServerStream.MaxAllowedServerInstances,
PipeTransmissionMode.Message,
PipeOptions.Asynchronous))
{
Console.WriteLine("Waiting...");
await server.WaitForConnectionAsync().ConfigureAwait(false);
if (server.IsConnected)
{
Console.WriteLine("Connected");
if (server.CanRead) {
// Read something...
}
if (server.CanWrite) {
// Write something...
await server.FlushAsync().ConfigureAwait(false);
server.WaitForPipeDrain();
}
server.Disconnect();
await HandleRequestAsync().ConfigureAwait(false);
}
}
}
private static async Task SendRequestAsync(int index, int counter, int max)
{
using (NamedPipeClientStream client = new NamedPipeClientStream(".", "MyPipe", PipeDirection.InOut, PipeOptions.Asynchronous))
{
await client.ConnectAsync().ConfigureAwait(false);
if (client.IsConnected)
{
Console.WriteLine($"Index: {index} Counter: {counter}");
if (client.CanWrite) {
// Write something...
await client.FlushAsync().ConfigureAwait(false);
client.WaitForPipeDrain();
}
if (client.CanRead) {
// Read something...
}
}
if (counter <= max) {
await SendRequestAsync(index, ++counter, max).ConfigureAwait(false);
}
else {
Console.WriteLine($"{index} Done!");
}
}
}
}
}
Assumptions:
The way I expect it to work is for all the requests I make when I call SendRequestAsync to execute concurrently, with each request then making additional requests until its counter reaches 6 and finally printing "Done!".
Remarks:
I've tested it on .NET Framework 4.7.1 and .NET Core 2.0 and I get the same results.
The communication between the clients and the server is always local to the machine. The clients are web applications that can queue jobs such as launching third-party processes, and the server will be deployed as a Windows service on the same machine as the web server that hosts these clients.
When disconnecting, WaitForPipeDrain() can throw an IOException due to a broken pipe.
If this happens in your server Task, then it will never listen for the next connection, and all of the remaining client connections hang on ConnectAsync().
If this happens in one of the client Tasks, then it will not continue to recurse and increment the counter for that index.
If you wrap the call to WaitForPipeDrain() in a try/catch, the program will continue running forever, because your function HandleRequestAsync() is infinitely recursive.
In short, to get this to work:
Handle IOException from WaitForPipeDrain()
HandleRequestAsync() has to finish at some point.
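As a concrete sketch of both points, HandleRequestAsync could loop instead of recursing and wrap the drain in a try/catch (the KeepGoing flag is an assumption for illustration, and IOException requires a using System.IO directive):
private static async Task HandleRequestAsync()
{
    // Loop instead of recursing so the method can actually finish.
    while (KeepGoing)
    {
        using (NamedPipeServerStream server = new NamedPipeServerStream("MyPipe",
            PipeDirection.InOut,
            NamedPipeServerStream.MaxAllowedServerInstances,
            PipeTransmissionMode.Message,
            PipeOptions.Asynchronous))
        {
            await server.WaitForConnectionAsync().ConfigureAwait(false);
            // Read something...
            // Write something...
            await server.FlushAsync().ConfigureAwait(false);
            try
            {
                server.WaitForPipeDrain();
            }
            catch (IOException)
            {
                // Broken pipe: the client went away early; keep serving other clients.
            }
            if (server.IsConnected)
            {
                server.Disconnect();
            }
        }
    }
}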
Here is the complete code after some iterations:
PipeServer.cs:
namespace AsyncPipes;
using System.Diagnostics.CodeAnalysis;
using System.IO.Pipes;
public static class PipeServer
{
public static void WaitForConnection()
=> WaitForConnectionInitializer();
private static void WaitForConnectionInitializer()
{
var context = new ServerContext();
var server = context.Server;
try
{
Console.WriteLine($"Waiting a client...");
server.BeginWaitForConnection(WaitForConnectionCallback, context);
}
catch
{
// We need to cleanup here only when something goes wrong.
context.Dispose();
throw;
}
static void WaitForConnectionCallback(IAsyncResult result)
{
var (context, server, _) = ServerContext.FromResult(result);
server.EndWaitForConnection(result);
WaitForConnectionInitializer();
BeginRead(context);
}
static void BeginRead(ServerContext context)
{
var (_, server, requestBuffer) = context;
server.BeginRead(requestBuffer, 0, requestBuffer.Length, ReadCallback, context);
}
static void BeginWrite(ServerContext context)
{
var (_, server, responseBuffer) = context;
server.BeginWrite(responseBuffer, 0, responseBuffer.Length, WriteCallback, context);
}
static void ReadCallback(IAsyncResult result)
{
var (context, server, requestBuffer) = ServerContext.FromResult(result);
var bytesRead = server.EndRead(result);
if (bytesRead > 0)
{
if (!server.IsMessageComplete)
{
BeginRead(context);
}
else
{
var index = BitConverter.ToInt32(requestBuffer, 0);
Console.WriteLine($"{index} Request.");
BeginWrite(context);
}
}
}
static void WriteCallback(IAsyncResult result)
{
var (context, server, responseBuffer) = ServerContext.FromResult(result);
var index = -1;
try
{
server.EndWrite(result);
server.WaitForPipeDrain();
index = BitConverter.ToInt32(responseBuffer, 0);
Console.WriteLine($"{index} Pong.");
}
finally
{
context.Dispose();
Console.WriteLine($"{index} Disposed.");
}
}
}
private sealed class ServerContext : IDisposable
{
[NotNull]
public byte[]? Buffer { get; private set; } = new byte[4];
[NotNull]
public NamedPipeServerStream? Server { get; private set; } = new ("PipesDemo",
PipeDirection.InOut,
NamedPipeServerStream.MaxAllowedServerInstances,
PipeTransmissionMode.Message,
PipeOptions.Asynchronous);
public void Deconstruct(out ServerContext context, out NamedPipeServerStream server, out byte[] buffer)
=> (context, server, buffer) = (this, Server, Buffer);
public static ServerContext FromResult(IAsyncResult result)
{
ArgumentNullException.ThrowIfNull(result.AsyncState);
return (ServerContext)result.AsyncState;
}
public void Dispose()
{
if (Server is not null)
{
if (Server.IsConnected)
{
Server.Disconnect();
}
Server.Dispose();
}
Server = null;
Buffer = null;
}
}
}
PipeClient:
public static class PipeClient
{
public static void CreateConnection(int index)
{
using var client = new NamedPipeClientStream(".", "PipesDemo", PipeDirection.InOut, PipeOptions.None);
client.Connect();
var requestBuffer = BitConverter.GetBytes(index);
client.Write(requestBuffer, 0, requestBuffer.Length);
client.Flush();
client.WaitForPipeDrain();
Console.WriteLine($"{index} Ping.");
var responseBuffer = new byte[4];
var totalRead = 0;
while (totalRead < responseBuffer.Length)
{
var bytesRead = client.Read(responseBuffer, totalRead, responseBuffer.Length - totalRead);
if (bytesRead == 0) break;
totalRead += bytesRead;
}
index = BitConverter.ToInt32(responseBuffer, 0);
Console.WriteLine($"{index} Response.");
}
}
Program.cs:
namespace AsyncPipes;
internal class Program
{
private const int MaxRequests = 1000;
private static void Main()
{
var tasks = new List<Task>
{
Task.Run(PipeServer.WaitForConnection)
};
tasks.AddRange(Enumerable.Range(0, MaxRequests - 1)
.Select(i => Task.Factory.StartNew(() => PipeClient.CreateConnection(i),
TaskCreationOptions.LongRunning)));
Task.WaitAll(tasks.ToArray());
Console.ReadKey();
}
}
You can sort the messages and observe the following:
Connections are opened and closed correctly.
Data is sent and received correctly.
Finally, the server still waits for further connections.
Updates:
Changed PipeOptions.Asynchronous to PipeOptions.None; otherwise it seems to hang for the duration of the requests and only then process them all at once.
PipeOptions.Asynchronous is simply causing a different order of execution than PipeOptions.None, and that's exposing a race condition / deadlock in your code. You can see the effect of it if you use Task Manager, for example, to monitor the thread count of your process... you should see it creeping up at a rate of appx 1 thread per second, until it gets to around 100 threads (maybe 110 or so), at which point your code runs to completion. Or if you add ThreadPool.SetMinThreads(200, 200) at the beginning. Your code has a problem where if the wrong ordering occurs (and that's made more likely by using Asynchronous), you create a cycle where it can't be satisfied until there are enough threads to run all of the concurrent ConnectAsyncs your main method has queued, which aren't truly async and instead just create a work item to invoke the synchronous Connect method (this is unfortunate, and it's issues like this that are one of the reasons I urge folks not to expose async APIs that simply queue works items to call sync methods). Source.
Revised and simplified the example:
There's no true asynchronous Connect method for pipes; ConnectAsync uses Task.Factory.StartNew behind the scenes, so you might just as well use Connect and pass the method that calls the synchronous Connect (SendRequest in our example) to Task.Factory.StartNew.
The server is completely asynchronous now and as far as I can tell it works with no issues.
Fixed all of the BeginXXX/EndXXX methods.
Removed unnecessary try/catch blocks.
Removed unnecessary messages.
Refactored the code a bit to make it more readable and concise.
Removed the async/await version of the server, as I refactored the code and didn't have time to update it, but the version above should give you an idea of how to do it, and the newer APIs are much friendlier and easier to deal with.
I hope it helps.

Problems with asynchronous functions with TcpListener and TcpClient, function not waiting on await keyword

I am new to asynchronous socket programming, and I am having problems with my asynchronous functions.
I am trying to create a chat program that uses Windows Forms for the client, and a console application for the server.
Here is the code for handling connections on my server:
public async void StartServer()
{
TcpListener listener = new TcpListener(_ip, _port);
listener.Start();
Console.WriteLine("Server is running on IP: {0} Port: {1}", _ip.ToString(), _port);
while (true)
{
try
{
TcpClient client = await listener.AcceptTcpClientAsync();
HandleConnections(client);
}
catch (Exception e)
{
Console.WriteLine(e.Message);
}
}
}
private async void HandleConnections(TcpClient client)
{
NetworkStream stream = client.GetStream();
byte[] buffer = new byte[256];
string message = null;
int x;
while(stream.DataAvailable)
{
x = await stream.ReadAsync(buffer, 0, buffer.Length);
message += Encoding.ASCII.GetString(buffer);
}
message = message.Replace('\0', ' ');
message = message.Trim();
Console.WriteLine("Message Recieved: " + message);
byte[] bytes = Encoding.ASCII.GetBytes(message);
await stream.WriteAsync(bytes, 0, bytes.Length);
stream.Close();
}
And here is the code for the client program connecting to the server:
private async void ConnectButton_Click(object sender, EventArgs e)
{
IPAddress address = IPAddress.Parse(IPInput.Text);
client = new TcpClient();
await client.ConnectAsync(address, 12345);
NetworkStream stream = client.GetStream();
string message = UsernameInput.Text + " Connected!";
Task<int> sendTask = SendMessage(stream, message);
int sendComp = await sendTask;
Task<string> recieveTask = RecieveMessage(stream);
string recieved = await recieveTask;
stream.Close();
ChatText.AppendText(recieved);
}
private async Task<int> SendMessage(NetworkStream stream, string message)
{
byte[] bytes = Encoding.ASCII.GetBytes(message + "\r\n");
await stream.WriteAsync(bytes, 0, bytes.Length);
return 1;
}
private async Task<string> RecieveMessage(NetworkStream stream)
{
byte[] buffer = new byte[256];
string message = null;
int x;
while (stream.DataAvailable)
{
x = await stream.ReadAsync(buffer, 0, buffer.Length);
message += Encoding.ASCII.GetString(buffer);
}
return message;
}
The first problem I am having is that when I run the client program and click the ConnectButton, the message gets sent to the server program, which outputs Message Recieved: user Connected!, but then the client program encounters a null reference exception on the line ChatText.AppendText(recieved); because the recieved variable is null. It seems that the line string recieved = await recieveTask; is not waiting for the task to finish executing and jumps to the next line without assigning a value to recieved. If I put a breakpoint at the top of the private async Task<string> RecieveMessage(NetworkStream stream) function and step through it, the recieved variable gets its value and the code completes successfully, but without the breakpoint I get the null reference exception.
The next issue I am having is that if I leave the server running, open the client again and try connecting, the server gets a null reference exception on the line message = message.Replace('\0', ' ');. The first time I run the client, the server receives the message successfully, but the second time it doesn't get any data from the stream and leaves the variable null, resulting in a null reference exception.
I apologize if my code is garbage; I have been reading the MSDN documentation for hours, am unable to come up with a solution, and feel like I am doing this completely wrong. So my questions are as follows:
What is causing these errors that I am encountering? And am I approaching this problem the right way?
Both of your issues are unrelated to asynchronous functions; in fact, both are caused by the same problem:
while (stream.DataAvailable)
{
// read stream here
}
If data is not yet available to read from the stream, both your RecieveMessage and HandleConnections functions simply skip reading the stream entirely. What you should do instead (in your case) is:
do
{
// read your stream here
} while (stream.DataAvailable);
That way, the first Read (or ReadAsync) will wait until the first chunk of data arrives, and only after that first chunk will it check whether more data is already available.
Also note that you use a large buffer (256 bytes) while the client/server send short messages (like "Client received: xxx"), which means most of the buffer stays empty; when you convert the whole buffer to a string via Encoding.ASCII.GetString, you get a lot of trailing padding ('\0' characters) at the end ("Client received: xxx ...").
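Putting both points together, the RecieveMessage function could look like this (a sketch only; it keeps the original line-less framing and just fixes the loop and the decoding):
private async Task<string> RecieveMessage(NetworkStream stream)
{
    byte[] buffer = new byte[256];
    var message = new StringBuilder();
    do
    {
        // The first ReadAsync waits until data arrives; append only the bytes actually read.
        int x = await stream.ReadAsync(buffer, 0, buffer.Length);
        message.Append(Encoding.ASCII.GetString(buffer, 0, x));
    } while (stream.DataAvailable);
    return message.ToString();
}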
It doesn't look like a problem with async/await so much as an issue with your TCP streams.
You don't appear to be actually waiting for a response: SendMessage writes to the server, then RecieveMessage expects a response to already be in the stream.
If stream.DataAvailable is false when you hit the while loop for the first time, message will remain null.
You need some way to wait for there to be data in the stream before you attempt to read from it.
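One simple way to do that, given that the client already terminates each message with "\r\n", is to frame messages by line instead of polling DataAvailable (a sketch under that assumption; ReceiveLineAsync is an illustrative name, not part of the original code):
private static async Task<string> ReceiveLineAsync(NetworkStream stream)
{
    // leaveOpen: true so disposing the reader does not close the underlying stream.
    using (var reader = new StreamReader(stream, Encoding.ASCII, false, 256, leaveOpen: true))
    {
        // ReadLineAsync waits until a full line has arrived (or the stream is closed).
        return await reader.ReadLineAsync();
    }
}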

Live FLV streaming in C# WebApi

Currently I have a working live stream using WebApi, receiving an FLV stream directly from ffmpeg and sending it straight to the client using PushStreamContent. This works perfectly fine if the webpage is already open when the stream starts. The issue is that when I open another page or refresh the page, the stream can no longer be viewed (it is still being sent to the client fine). I think something is missing from the start of the stream, but I am not sure what to do. Any pointers would be greatly appreciated.
Code for client reading stream
public class VideosController : ApiController
{
public HttpResponseMessage Get()
{
var response = Request.CreateResponse();
response.Content = new PushStreamContent(WriteToStream, new MediaTypeHeaderValue("video/x-flv"));
return response;
}
private async Task WriteToStream( Stream arg1, HttpContent arg2, TransportContext arg3 )
{
//I think metadata needs to be written here but not sure how
Startup.AddSubscriber( arg1 );
await Task.Yield();
}
}
Code for receiving stream and then sending to client
while (true)
{
bytes = new byte[8024000];
int bytesRec = handler.Receive(bytes);
foreach (var subscriber in Startup.Subscribers.ToList())
{
var theSubscriber = subscriber;
try
{
await theSubscriber.WriteAsync( bytes, 0, bytesRec );
}
catch
{
Startup.Subscribers.Remove(theSubscriber);
}
}
}
I've never used FLV or studied video formats closely.
Most file formats are structured, especially video formats. They contain frames (i.e. complete or partial screenshots, depending on the compression format).
You would have to be really lucky to hit the start of a frame when you begin streaming to a new subscriber. Hence, when they start receiving the stream, they cannot identify the format because the first frame is partial.
You can read more about FLV frames in the Wikipedia article. This is most likely your problem.
A simple attempt would be to try to save the initial header that you receive from the streaming server when the first subscriber connects.
Something like:
static byte[] _header = new byte[9]; // signature, version, flags, headerSize
static bool _headerIsStored;
public void YourStreamMethod()
{
var bytes = new byte[8024000];
int bytesRec = handler.Receive(bytes);
if (!_headerIsStored)
{
// store the 9-byte FLV file header from the first received chunk
Buffer.BlockCopy(bytes, 0, _header, 0, 9);
_headerIsStored = true;
}
}
.. which allows you to send the header to the next connecting subscriber:
private async Task WriteToStream( Stream arg1, HttpContent arg2, TransportContext arg3 )
{
// send the FLV header
arg1.Write(_header, 0, 9);
Startup.AddSubscriber( arg1 );
await Task.Yield();
}
Once done, pray that the receiver will ignore partial frames. If it doesn't, you need to analyze the stream to identify where the next frame starts.
To do that you need to do something like this:
Create a BytesLeftToNextFrame variable.
Store the received packet header in a buffer
Convert the "Payload size" bits to an int
Reset the BytesLeftToNextFrame to the parsed value
Countdown until the next time you should read a header.
Finally, when a new client connects, do not start streaming until you know that the next frame arrives.
Pseudo code:
var bytesLeftToNextFrame = 0;
while (true)
{
bytes = new byte[8024000];
int bytesRec = handler.Receive(bytes);
foreach (var subscriber in Startup.Subscribers.ToList())
{
var theSubscriber = subscriber;
try
{
if (subscriber.IsNew && bytesLeftToNextFrame < bytesRec)
{
//start from the index where the new frame starts
await theSubscriber.WriteAsync( bytes, bytesLeftToNextFrame, bytesRec - bytesLeftToNextFrame);
subscriber.IsNew = false;
}
else
{
//send everything, since we've already in streaming mode
await theSubscriber.WriteAsync( bytes, 0, bytesRec );
}
}
catch
{
Startup.Subscribers.Remove(theSubscriber);
}
}
//TODO: check if the current frame is done
// then parse the next header and reset the counter.
}
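For the TODO above, the size of the current frame can be derived from the FLV tag header (a sketch assuming the standard FLV tag layout: an 11-byte tag header whose bytes 1-3 hold a big-endian DataSize, followed by the payload and a 4-byte PreviousTagSize; the helper name is made up for illustration):
// Given a buffer positioned at the start of an FLV tag, returns how many
// bytes to skip to reach the start of the next tag.
static int BytesToNextFrame(byte[] buffer, int offset)
{
    // DataSize: 3-byte big-endian field at offsets 1..3 of the tag header.
    int dataSize = (buffer[offset + 1] << 16) | (buffer[offset + 2] << 8) | buffer[offset + 3];
    // 11-byte tag header + payload + 4-byte PreviousTagSize trailer.
    return 11 + dataSize + 4;
}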
I'm not an expert in streaming, but it looks like you should close the stream so that all the data gets written:
await theSubscriber.WriteAsync( bytes, 0, bytesRec );
As mentioned in WebAPI StreamContent vs PushStreamContent:
{
// After save we close the stream to signal that we are done writing.
xDoc.Save(stream);
stream.Close();
}
I like this code because it demonstrates a fundamental error when dealing with async programming:
while (true)
{
}
This is a synchronous loop that iterates as fast as possible; it can execute thousands of times every second (depending on the available software and hardware resources).
await theSubscriber.WriteAsync( bytes, 0, bytesRec );
This is an async command (if that wasn't clear enough) that may execute on a different thread (the while loop represents the main thread of execution).
Now, in order to make the while loop wait for the async command, we use await. Sounds good (or else the while loop would execute thousands of times, launching countless async commands).
But because the loop (over the subscribers) needs to transmit the stream to all subscribers simultaneously, it gets stuck on the await keyword.
That is why a reload / new subscriber freezes the whole thing (new connection = new subscriber).
Conclusion: the entire foreach loop should be inside a Task. The Task needs to wait until the server has sent the stream to all subscribers; only then should it continue with the while loop via ContinueWith (that is why it is called that, right?).
So the write command needs to be executed without the await keyword:
theSubscriber.WriteAsync
and the foreach loop should use a task that continues with the while loop once it is done.
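One way to express that suggestion is to start the write for every subscriber first and only await the whole batch before receiving the next chunk, so a slow or newly connected subscriber cannot stall the others mid-loop (a sketch reusing the names from the question, not a drop-in fix):
while (true)
{
    var bytes = new byte[8024000];
    int bytesRec = handler.Receive(bytes);
    // Kick off the write for every subscriber without awaiting each one in turn.
    var pendingWrites = Startup.Subscribers.ToList().Select(async subscriber =>
    {
        try
        {
            await subscriber.WriteAsync(bytes, 0, bytesRec);
        }
        catch
        {
            Startup.Subscribers.Remove(subscriber);
        }
    }).ToList();
    // Continue to the next receive only once the whole batch has been written.
    await Task.WhenAll(pendingWrites);
}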

async Task that needs to wait for IObserver

I have an IObserver class that writes packets to a stream and waits for the correct response, however I am not happy with part of the code:
bool ResponseReceived = false;
public async Task<IResponse> WriteAsync(Stream stream, bool returnResponse = false, bool flush = true, CancellationToken token = default(CancellationToken))
{
if (returnResponse)
{
//subscribe to IObserveable
PacketRouter router = new PacketRouter();
Subscribe(router);
//write the packet to the stream
await base.WriteAsync(stream, flush, token);
//I dont like the way this is done, is it possible to use task.WhenAny or WhenAll or even something I havent tried
if (!ResponseReceived)
{
var ts = TimeSpan.FromSeconds(Timeout);
DateTime maximumTime = DateTime.Now + ts;
while (!ResponseReceived && DateTime.Now < maximumTime)
{
await Task.Delay(10);
}
}
//Unsubscribe when the correct type of packet has been received or it has timed out
Unsubscribe();
}
else
{
//we dont need the return packet so we will just write to the stream and exit
await base.WriteAsync(stream, flush, token);
}
//return the response packet
return ResponseData;
}
public virtual void OnNext(Packet packet)
{
//when a packet is received, validate it
if (ValidResponse(packet))
{
//if valid set the response data
ResponseData.Payload = packet.Payload;
ResponseReceived = true; //The right to return the response is set here
}
}
I have tried using TaskCompletionSource and Task.WaitAny(responseReceived, Task.Delay(ts)), but I couldn't get it to work either.
Is there a better way to do this?
Updated with a little more context:
The Write class does not read a packet. A separate class (PacketHandler) does this and then passes it to an IObservable class for dissemination to any class that wishes to listen. The reason for this is that broadcast messages are also received, which may arrive between the request and the response; other packets may also be waiting for a response (although this should technically never happen).
You can directly await an observable, like so:
var router = new PacketRouter();
// write the packet to the stream
await base.WriteAsync(stream, flush, token);
try
{
// await the observable PacketRouter.
Packet p = await router
.FirstAsync()
.Timeout(DateTime.Now.AddSeconds(Timeout));
}
catch(TimeoutException)
{
// ...
}
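Alternatively, if you would rather not take the Rx dependency, the same idea can be expressed with a TaskCompletionSource instead of the polling loop (a sketch only; the responseTcs field and its wiring are assumptions built on the names from the question):
private TaskCompletionSource<IResponse> responseTcs;

public async Task<IResponse> WriteAsync(Stream stream, bool returnResponse = false, bool flush = true, CancellationToken token = default(CancellationToken))
{
    if (!returnResponse)
    {
        await base.WriteAsync(stream, flush, token);
        return ResponseData;
    }
    responseTcs = new TaskCompletionSource<IResponse>();
    PacketRouter router = new PacketRouter();
    Subscribe(router);
    try
    {
        await base.WriteAsync(stream, flush, token);
        // Wait for OnNext to complete the TCS, or give up after the timeout.
        var completed = await Task.WhenAny(responseTcs.Task, Task.Delay(TimeSpan.FromSeconds(Timeout), token));
        return completed == responseTcs.Task ? await responseTcs.Task : ResponseData;
    }
    finally
    {
        Unsubscribe();
    }
}

public virtual void OnNext(Packet packet)
{
    if (ValidResponse(packet))
    {
        ResponseData.Payload = packet.Payload;
        // Completes the awaited task instead of setting a flag.
        responseTcs?.TrySetResult(ResponseData);
    }
}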

Simple Task-returning asynchronous HttpListener with async/await and handling high load

I have created the following simple HttpListener to serve multiple requests at the same time (on .NET 4.5):
class Program {
static void Main(string[] args) {
HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://+:8088/");
listener.Start();
ProcessAsync(listener).ContinueWith(task => { });
Console.ReadLine();
}
static async Task ProcessAsync(HttpListener listener) {
HttpListenerContext ctx = await listener.GetContextAsync();
// spin up another listener
Task.Factory.StartNew(() => ProcessAsync(listener));
// Simulate long running operation
Thread.Sleep(1000);
// Perform
Perform(ctx);
await ProcessAsync(listener);
}
static void Perform(HttpListenerContext ctx) {
HttpListenerResponse response = ctx.Response;
string responseString = "<HTML><BODY> Hello world!</BODY></HTML>";
byte[] buffer = Encoding.UTF8.GetBytes(responseString);
// Get a response stream and write the response to it.
response.ContentLength64 = buffer.Length;
Stream output = response.OutputStream;
output.Write(buffer, 0, buffer.Length);
// You must close the output stream.
output.Close();
}
}
I use the Apache Benchmark tool to load test this. When I make a single request, the max wait time for a request is 1 second. If I make 10 requests, for example, the max wait time for a response goes up to 2 seconds.
How would you change my above code to make it as efficient as it can be?
Edit
After #JonSkeet's answer, I changed the code as below. Initially, I tried to simulate a blocking call, but I guess that was the core problem. So, I took #JonSkeet's suggestion and changed it to Task.Delay(1000). Now, the code below gives a max wait time of approx. 1 second for 10 concurrent requests:
class Program {
static bool KeepGoing = true;
static List<Task> OngoingTasks = new List<Task>();
static void Main(string[] args) {
HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://+:8088/");
listener.Start();
ProcessAsync(listener).ContinueWith(async task => {
await Task.WhenAll(OngoingTasks.ToArray());
});
var cmd = Console.ReadLine();
if (cmd.Equals("q", StringComparison.OrdinalIgnoreCase)) {
KeepGoing = false;
}
Console.ReadLine();
}
static async Task ProcessAsync(HttpListener listener) {
while (KeepGoing) {
HttpListenerContext context = await listener.GetContextAsync();
HandleRequestAsync(context);
// TODO: figure out the best way add ongoing tasks to OngoingTasks.
}
}
static async Task HandleRequestAsync(HttpListenerContext context) {
// Do processing here, possibly affecting KeepGoing to make the
// server shut down.
await Task.Delay(1000);
Perform(context);
}
static void Perform(HttpListenerContext ctx) {
HttpListenerResponse response = ctx.Response;
string responseString = "<HTML><BODY> Hello world!</BODY></HTML>";
byte[] buffer = Encoding.UTF8.GetBytes(responseString);
// Get a response stream and write the response to it.
response.ContentLength64 = buffer.Length;
Stream output = response.OutputStream;
output.Write(buffer, 0, buffer.Length);
// You must close the output stream.
output.Close();
}
}
It looks to me like you'll end up with a bifurcation of listeners. Within ProcessAsync, you start a new task to listen (via Task.Factory.StartNew), and then you call ProcessAsync again at the end of the method. How can that ever finish? It's not clear whether that's the cause of your performance problems, but it definitely looks like an issue in general.
I'd suggest changing your code to be just a simple loop:
static async Task ProcessAsync(HttpListener listener) {
while (KeepGoing) {
var context = await listener.GetContextAsync();
HandleRequestAsync(context);
}
}
static async Task HandleRequestAsync(HttpListenerContext context) {
// Do processing here, possibly affecting KeepGoing to make the
// server shut down.
}
Now currently the above code ignores the return value of HandleRequestAsync. You may want to keep a list of the "currently in flight" tasks, and when you've been asked to shut down, use await Task.WhenAll(inFlightTasks) to avoid bringing the server down too quickly.
Also note that Thread.Sleep is a blocking delay. An asynchronous delay would be await Task.Delay(1000).
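For the in-flight bookkeeping, one possible sketch (the list, the lock, and the shutdown handling here are assumptions, not part of the answer) would be:
static readonly object TasksLock = new object();
static readonly List<Task> InFlightTasks = new List<Task>();

static async Task ProcessAsync(HttpListener listener) {
    while (KeepGoing) {
        HttpListenerContext context = await listener.GetContextAsync();
        Task handler = HandleRequestAsync(context);
        lock (TasksLock) {
            // Prune completed tasks so the list does not grow without bound.
            InFlightTasks.RemoveAll(t => t.IsCompleted);
            InFlightTasks.Add(handler);
        }
    }
    Task[] pending;
    lock (TasksLock) {
        pending = InFlightTasks.ToArray();
    }
    // Give outstanding requests a chance to finish before shutting down.
    await Task.WhenAll(pending);
}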
