I have two C# apps running simultaneously: a console app that sends REST calls to the other, which runs a Nancy self-hosted REST server.
Here's a really basic view of some of the code of both parts.
RestsharpClient:
public async void PostAsyncWhile()
{
    IRestClient RestsharpClient = new RestClient("http://localhost:50001");
    var request = new RestRequest($"/nancyserver/whileasync", Method.POST);
    var asyncHandle = RestsharpClient.ExecuteAsync(request, response =>
    {
        Console.WriteLine(response.Content.ToString());
    });
    await Task.Delay(1000);
    asyncHandle.Abort(); //PROBLEM
}
NancyRestServer:
public class RestModule : NancyModule
{
    public RestModule() : base("/nancyserver")
    {
        Post("/whileasync", async (args, ctx) => await WhileAsync(args, ctx));
    }

    private async Task<bool> WhileAsync(dynamic args, CancellationToken ctx)
    {
        do
        {
            await Task.Delay(100);
            if (ctx.IsCancellationRequested) //PROBLEM
            {
                break;
            }
        }
        while (true);
        return true;
    }
}
The client sends a command to start a while loop on the server, waits one second, and then aborts the async call.
The problem I'm having is that the asyncHandle.Abort(); doesn't seem to trigger the ctx.IsCancellationRequested on the server's side.
How am I supposed to cancel/abort an async call on a nancy host server using a restsharp client?
There's a couple of parts that go into cancelling a web request.
First, the client must support cancellation and use that cancellation to forcibly close the connection. So the first thing to do is to monitor the network packets (e.g., using Wireshark) and make sure RestSharp is sending an RST (reset), not just closing its side of the connection. An RST signals the other side that you want to cancel the request; merely closing the connection signals that you're done sending data but are still willing to receive a response.
Next, the server must support detecting cancellation. Not all servers do. For quite a while, ASP.NET (pre-Core) did not properly support client-initiated request cancellation. There was some kind of race condition where it wouldn't cancel properly (don't remember the details), so they disabled that cancellation path. This has all been sorted out in ASP.NET Core, but AFAIK, it was never fixed in ASP.NET Legacy Edition.
If RestSharp is not sending an RST, then open an issue with that team. If Nancy is not responding to the RST, then open an issue with that team. If it's an ASP.NET issue, the Nancy team should know that (and respond informing you they can't fix it), in which case you're out of luck. :/
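For what it's worth, if you can switch the client to HttpClient, cancellation is built in: cancelling the token tears down the request, which covers the client half of the story above. A minimal sketch (the URL mirrors the question's endpoint; whether the server actually observes the cancellation still depends on the server stack):

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class CancellationDemo
{
    // Cancels an in-flight POST after one second. When HttpClient cancels,
    // it aborts the underlying request/connection; the server-side part of
    // the story (Nancy / ASP.NET noticing it) is still out of our hands.
    public static async Task PostWithTimeoutAsync()
    {
        using (var client = new HttpClient())
        using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(1)))
        {
            try
            {
                var response = await client.PostAsync(
                    "http://localhost:50001/nancyserver/whileasync",
                    content: null,
                    cts.Token);
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
            catch (OperationCanceledException)
            {
                Console.WriteLine("Request cancelled after 1s.");
            }
        }
    }
}
```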
Related
I'm doing SignalR server-to-client streaming using System.Threading.Channels with a .NET client. The usage is fairly basic, similar to what is described in the introductory docs.
The hub code is similar to this:
public ChannelReader<byte[]> Retrieve(Guid id, CancellationToken cancellationToken)
{
    var channel = Channel.CreateBounded<byte[]>(_limit);
    _ = WriteItemsAsync(channel.Writer, id, cancellationToken);
    return channel.Reader;
}
private async Task WriteItemsAsync(ChannelWriter<byte[]> writer, Guid id, CancellationToken cancellationToken)
{
    Exception localException = null;
    try
    {
        //loop and write to the ChannelWriter until finished
    }
    catch (Exception ex)
    {
        localException = ex;
    }
    finally
    {
        writer.Complete(localException);
    }
}
And client similar to this:
var channel = await hubConnection.StreamAsChannelAsync<byte[]>("Retrieve", _guid, cancellationTokenSource.Token);
while (await channel.WaitToReadAsync())
{
    while (channel.TryRead(out var data))
    {
        //handle data
    }
}
When my hub method is done streaming, it calls Complete() on its ChannelWriter. SignalR, presumably, internally sees the Complete call on the corresponding ChannelReader, translates that into an internal SignalR message and delivers that to the client. The client's own ChannelReader is then marked as complete by SignalR and my client code wraps up its own work on the stream.
Is that "completed" notification, from server to client, guaranteed to be delivered? In other cases where the hub is broadcasting non-streaming messages to clients, it generally "fires and forgets", but I've got to assume that calling Complete on the streaming Channel has acknowledged delivery, otherwise the client could be in a state where it is holding a streaming ChannelReader open indefinitely while the server sees the stream as closed.
Less important to the question, but the reason I ask is that I am trying to narrow down just such a scenario where a data flow pipeline that consumes a SignalR streaming interface very occasionally hangs, and it seems that the only point where it is hanging is somewhere in the SignalR client.
I asked the devs over on github: https://github.com/dotnet/aspnetcore/issues/30128.
Here is a quick summary:
"Messages over SignalR are as reliable as TCP. Basically what this means is that either you get the message, or the connection closes. In both cases the channel on the client side will be completed."
And:
"Right it should never hang forever. It'll either send the complete message or disconnect. Either way, a hang would be a bug in the client"
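Given that guarantee, a defensive client-side sketch might still be useful while hunting the hang: pass a token into WaitToReadAsync so that if the completion message were ever lost (which, per the quotes above, would be a SignalR bug), the consumer fails loudly instead of waiting forever. The watchdog window here is an arbitrary assumption for illustration:

```csharp
// Defensive read loop: a watchdog token turns a theoretical lost
// "complete" message into an OperationCanceledException instead of
// an indefinite hang. Five minutes is an illustrative choice.
using (var watchdog = new CancellationTokenSource(TimeSpan.FromMinutes(5)))
{
    var channel = await hubConnection.StreamAsChannelAsync<byte[]>(
        "Retrieve", _guid, watchdog.Token);
    while (await channel.WaitToReadAsync(watchdog.Token))
    {
        while (channel.TryRead(out var data))
        {
            watchdog.CancelAfter(TimeSpan.FromMinutes(5)); // reset on progress
            //handle data
        }
    }
}
```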
For a project, I need to implement SSE (Server Sent Events) in a C# Application. Although this may sound easy, I've got no clue how to solve this.
As I'm new to C# (though not new to programming in general) I took an excursion to Google and tried to find some sample code. From what I've seen so far, I could learn to build an HTTP server with C# or consume server-sent events. But I found nothing about sending SSEs.
What I'm trying to get my head around: how can I keep sending updated data over the incoming request? Normally, you get a request, do your thing and reply. Done, connection closed. But in this case I want to kind of "stick" to the response-stream and send new data through, each time an event in my application fires.
The problem, for me, lies in this event-based approach: it's not some interval-based polling and updating. It's rather that the app goes like "Hey, something happened. I really should tell you about it!"
TL;DR: how can I hold on to that response-stream and send updates - not based on loops or timers, but each time certain events fire?
Also, before I forget: I know, there are libraries out there doing just that. But from what I've seen so far (and from what I've understood; correct me if I'm wrong) those solutions depend on ASP.NET / MVC / you name it. And as I'm just writing a "plain" C# application, I don't think I meet these requirements.
As for a light-weight server I would go with an OWIN selfhost WebAPI (https://learn.microsoft.com/en-us/aspnet/web-api/overview/hosting-aspnet-web-api/use-owin-to-self-host-web-api).
A simple server-sent event server action would basically go like:
public class EventController : ApiController
{
    public HttpResponseMessage GetEvents(CancellationToken clientDisconnectToken)
    {
        var response = Request.CreateResponse();
        response.Content = new PushStreamContent(async (stream, httpContent, transportContext) =>
        {
            using (var writer = new StreamWriter(stream))
            using (var consumer = new BlockingCollection<string>())
            {
                var eventGeneratorTask = EventGeneratorAsync(consumer, clientDisconnectToken);
                foreach (var @event in consumer.GetConsumingEnumerable(clientDisconnectToken))
                {
                    await writer.WriteLineAsync("data: " + @event);
                    await writer.WriteLineAsync();
                    await writer.FlushAsync();
                }
                await eventGeneratorTask;
            }
        }, "text/event-stream");
        return response;
    }

    private async Task EventGeneratorAsync(BlockingCollection<string> producer, CancellationToken cancellationToken)
    {
        try
        {
            while (!cancellationToken.IsCancellationRequested)
            {
                producer.Add(DateTime.Now.ToString(), cancellationToken);
                await Task.Delay(1000, cancellationToken).ConfigureAwait(false);
            }
        }
        finally
        {
            producer.CompleteAdding();
        }
    }
}
The important part here is the PushStreamContent, which basically just sends the HTTP headers and then leaves the connection open to write the data when it is available.
In my example the events are generated in an extra task which is given a producer-consumer collection, and which adds the events (here, the current time every second) to the collection as they become available. Whenever a new event arrives, GetConsumingEnumerable is automatically notified. The new event is then written in the proper server-sent event format to the stream and flushed. In practice you would need to send some pseudo-ping events every minute or so, as streams which are left open for a long time without data being sent over them tend to be closed by the OS/framework.
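One way to implement those pseudo-ping events, as a sketch: the SSE format allows comment lines (starting with a colon) which conforming clients ignore, so a second task can write one periodically. The interval and method name are assumptions:

```csharp
// Periodically writes an SSE comment line (": ping"), which EventSource
// clients ignore but which keeps idle connections from being reaped.
// Would share the same StreamWriter as the event loop above, so in a
// real implementation the two writers must be synchronized.
private static async Task KeepAliveAsync(StreamWriter writer, CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        await writer.WriteLineAsync(": ping");
        await writer.WriteLineAsync();
        await writer.FlushAsync();
        await Task.Delay(TimeSpan.FromSeconds(30), token);
    }
}
```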
The sample client code to test this would go like:
Place the following code inside an async method.
using (var client = new HttpClient())
using (var stream = await client.GetStreamAsync("http://localhost:9000/api/event"))
using (var reader = new StreamReader(stream))
{
    while (true)
    {
        Console.WriteLine(await reader.ReadLineAsync());
    }
}
This sounds like a good fit for SignalR. Note SignalR is part of the ASP.NET family, however this does NOT require the ASP.NET framework (System.Web), or IIS, as mentioned in comments.
To clarify, SignalR is part of ASP.NET. According to their site:
ASP.NET is an open source web framework for building modern web apps
and services with .NET. ASP.NET creates websites based on HTML5, CSS,
and JavaScript that are simple, fast, and can scale to millions of
users.
SignalR has no hard dependency on System.Web or on IIS.
You can self-host your ASP.NET application (see https://learn.microsoft.com/en-us/aspnet/signalr/overview/deployment/tutorial-signalr-self-host). If you use .NET Core, it is actually self-hosted by default and runs as a normal console application.
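For illustration, a minimal self-hosted SignalR server on modern .NET might look like the sketch below. The hub name, route, and event name are all hypothetical, not from the question:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical hub; clients connect to it and receive pushed events.
public class EventsHub : Hub { }

public static class Program
{
    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
        builder.Services.AddSignalR();
        var app = builder.Build();
        app.MapHub<EventsHub>("/events");

        // Elsewhere in the app, inject IHubContext<EventsHub> and push:
        // await hubContext.Clients.All.SendAsync("somethingHappened", payload);
        app.Run(); // runs as a plain console process, no IIS required
    }
}
```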
Has anyone had any good experience with opening a WebSocket connection inside an MVC controller?
Technology stack: ASPNET Core 1.0 (RC1) MVC, dnx46, System.Net.WebSockets
Why MVC instead of middleware: for overall consistency, routing, already injected repositories, an option to call private methods in the same controller.
[HttpGet("v1/resources/{id}")]
public async Task<IActionResult> GetAsync(string id)
{
    var resource = await this.repository.GetAsync(id);
    if (resource == null)
    {
        return new HttpStatusCodeResult(404);
    }
    if (this.HttpContext.WebSockets.IsWebSocketRequest)
    {
        var webSocket = await this.HttpContext.WebSockets.AcceptWebSocketAsync();
        if (webSocket != null && webSocket.State == WebSocketState.Open)
        {
            while (true)
            {
                var response = string.Format("Hello! Time {0}", System.DateTime.Now.ToString());
                var bytes = System.Text.Encoding.UTF8.GetBytes(response);
                await webSocket.SendAsync(new System.ArraySegment<byte>(bytes),
                    WebSocketMessageType.Text, true, CancellationToken.None);
                await Task.Delay(2000);
            }
        }
    }
    return new HttpStatusCodeResult(101);
}
Question: are there any known downsides to going this way instead of handling WebSocket connections in middleware? How about the handshake: do we need to do anything else in addition to returning an HTTP 101 status code?
Update 1: why not SignalR? There is no need for fallback techniques here, so while it's a good product, I see no benefit in adding an additional dependency in this situation.
Update 2: one downside I've already noticed: when the while(true) exits (for simplicity reasons not shown in the example above; let's say, when a channel needs to be closed), the method needs to return something (a Task). What should it be? An HTTP 200 status response? I guess not, because the WebSockets documentation says that nothing should be sent after the "close" frame.
Update 3: one thing I learned the hard way, that if you want to get WebSockets working while debugging in Visual Studio 2015 using IIS Express 10.0 on Windows 10, you still have to use https://github.com/aspnet/WebSockets and configure app.UseWebSockets() in your Startup.cs file. Otherwise, IsWebSocketRequest will be false. Anyone knows why? Handshake?
Seems fine.
You probably want to change the while(true) to while (!HttpContext.RequestAborted.IsCancellationRequested) so you detect client disconnects and end the request.
You don't need to check for null or the state of the websocket after you call accept.
I'm assuming all of that code is temporary and you'll actually be reading something from the websocket.
All the usual websocket rules apply:
Use SSL (when you're hosting it for real)
It won't work on multiple servers (it's a point to point socket connection)
You need to support handling partial frames. You can punt this if you know the client won't send any.
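On that last point, a sketch of a receive loop that accumulates partial frames until EndOfMessage is set. Buffer size and method name are illustrative choices, not from the question:

```csharp
using System;
using System.IO;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

// Reads one complete text message, looping over partial frames.
// A single ReceiveAsync call may return only a fragment of a message,
// so we buffer until the frame marked EndOfMessage arrives.
private static async Task<string> ReceiveTextMessageAsync(
    WebSocket socket, CancellationToken token)
{
    var buffer = new byte[4096];
    using (var ms = new MemoryStream())
    {
        WebSocketReceiveResult result;
        do
        {
            result = await socket.ReceiveAsync(new ArraySegment<byte>(buffer), token);
            if (result.MessageType == WebSocketMessageType.Close)
                throw new OperationCanceledException("Socket closed by peer.");
            ms.Write(buffer, 0, result.Count);
        }
        while (!result.EndOfMessage);
        return Encoding.UTF8.GetString(ms.ToArray());
    }
}
```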
I'm having some performance problems with Web API. In my real/production code I make a SOAP WS call; in this sample, I'll just sleep. I have 400+ clients sending requests to the Web API.
I guess it's a problem with Web API itself, because if I open 5 processes, I can handle more requests than with only one process.
My test async version of the controller looks like this
[HttpPost]
public Task<HttpResponseMessage> SampleRequest()
{
    return Request.Content.ReadAsStringAsync()
        .ContinueWith(content =>
        {
            Thread.Sleep(Timeout);
            return new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent(content.Result, Encoding.UTF8, "text/plain")
            };
        });
}
The sync version looks like this
[HttpPost]
public HttpResponseMessage SampleRequest()
{
    var content = Request.Content.ReadAsStringAsync().Result;
    Thread.Sleep(Timeout);
    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(content, Encoding.UTF8, "text/plain")
    };
}
My client code to this test, looks like this (it is configured to time out after 30 seconds)
for (int i = 0; i < numberOfRequests; i++)
{
    tasks.Add(new Task(() =>
    {
        MakeHttpPostRequest();
    }));
}
foreach (var task in tasks)
{
    task.Start();
}
I was not able to put it here in a nice way, but the table with the results is available on GitHub.
The CPU, memory, and disk I/O are low. There are always at least 800 available threads (both worker and I/O threads):
public static void AvailableThreads()
{
    int workerThreads;
    int ioThreads;
    ThreadPool.GetAvailableThreads(out workerThreads, out ioThreads);
    Console.WriteLine("Available threads {0} ioThreads {1}", workerThreads, ioThreads);
}
I've configured the DefaultConnectionLimit
System.Net.ServicePointManager.DefaultConnectionLimit = Int32.MaxValue;
My question is: why is there a queue to answer those requests?
In every test, the first responses take almost exactly the server's Thread.Sleep() time, but responses get slower as new requests arrive.
Any tip on how I can discover where the bottleneck is?
It is a .net 4.0 solution, using self host option.
Edit: I've also tested with .net 4.5 and Web API 2.0, and got the same behaviour.
The first requests get an answer almost as soon as the sleep expires; later ones take up to 4x the sleep time to get an answer.
Edit2: Gist of the web api1 implementation and gist of the web api2 implementation
Edit3: The MakeHttpPost method creates a new WebApiClient
Edit4:
If I change the
Thread.Sleep()
to
await Task.Delay(10000);
in the .NET 4.5 version, it can handle all requests, as expected. So I don't think it's related to any network issue.
Since Thread.Sleep() blocks the thread and Task.Delay doesn't, it looks like there's an issue with Web API consuming more threads. But there are available threads in the thread pool...
Edit 5: if I open 5 servers and double the number of clients, the servers can respond to all requests. So it looks like it's not a problem with the number of requests to a server, because I can 'scale' this solution by running many processes on different ports. It's more a problem with the number of requests to the same process.
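For reference, the fully asynchronous variant of the test action, sketched from the question's own code, blocks on neither .Result nor Thread.Sleep; this is the shape that handled all requests in Edit 4:

```csharp
[HttpPost]
public async Task<HttpResponseMessage> SampleRequest()
{
    // Reads the body and waits without holding a thread-pool thread,
    // so a single process can keep hundreds of requests in flight.
    var content = await Request.Content.ReadAsStringAsync();
    await Task.Delay(Timeout);
    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(content, Encoding.UTF8, "text/plain")
    };
}
```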
How to Check the TCP/IP stack for overloading
Run netstat on the server having the issue and look for connections stuck in TIME_WAIT, FIN_WAIT_1, FIN_WAIT_2, or other reset/close-related states. These are half-closed sessions where the stack is waiting for the other side to clean up... or where the other side did send packets but the local machine could not process them yet, depending on the stack's capacity to do the job.
The kicker is that even sessions showing ESTABLISHED could be in trouble, in that the time-out hasn't fired yet.
The symptoms described above are reminiscent of network or TCP/IP stack overload. Very similar behavior is seen when routers get overloaded.
I'm writing an application that proxies some HTTP requests using the ASP.NET Web API and I am struggling to identify the source of an intermittent error.
It seems like a race condition... but I'm not entirely sure.
Before I go into detail here is the general communication flow of the application:
Client makes a HTTP request to Proxy 1.
Proxy 1 relays the contents of the HTTP request to Proxy 2
Proxy 2 relays the contents of the HTTP request to the Target Web Application
Target Web App responds to the HTTP request and the response is streamed (chunked transfer) to Proxy 2
Proxy 2 returns the response to Proxy 1 which in turn responds to the original calling Client.
The Proxy applications are written in ASP.NET Web API RTM using .NET 4.5.
The code to perform the relay looks like so:
//Controller entry point.
public HttpResponseMessage Post()
{
    using (var client = new HttpClient())
    {
        var request = BuildRelayHttpRequest(this.Request);
        //HttpCompletionOption.ResponseHeadersRead - so that I can start streaming the response as soon
        //as it begins to filter in.
        var relayResult = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead).Result;
        var returnMessage = BuildResponse(relayResult);
        return returnMessage;
    }
}

private static HttpRequestMessage BuildRelayHttpRequest(HttpRequestMessage incomingRequest)
{
    var requestUri = BuildRequestUri();
    var relayRequest = new HttpRequestMessage(incomingRequest.Method, requestUri);
    if (incomingRequest.Method != HttpMethod.Get && incomingRequest.Content != null)
    {
        relayRequest.Content = incomingRequest.Content;
    }
    //Copies all safe HTTP headers (mainly content) to the relay request
    CopyHeaders(relayRequest, incomingRequest);
    return relayRequest;
}

private HttpResponseMessage BuildResponse(HttpResponseMessage responseMessage)
{
    var returnMessage = Request.CreateResponse(responseMessage.StatusCode);
    returnMessage.ReasonPhrase = responseMessage.ReasonPhrase;
    returnMessage.Content = CopyContentStream(responseMessage);
    //Copies all safe HTTP headers (mainly content) to the response
    CopyHeaders(returnMessage, responseMessage);
    return returnMessage;
}

private static PushStreamContent CopyContentStream(HttpResponseMessage sourceContent)
{
    var content = new PushStreamContent(async (stream, context, transport) =>
        await sourceContent.Content.ReadAsStreamAsync()
            .ContinueWith(t1 => t1.Result.CopyToAsync(stream)
                .ContinueWith(t2 => stream.Dispose())));
    return content;
}
The error that occurs intermittently is:
An asynchronous module or handler completed while an asynchronous operation was still pending.
This error usually occurs on the first few requests to the proxy applications after which the error is not seen again.
Visual Studio never catches the Exception when thrown.
But the error can be caught in the Global.asax Application_Error event.
Unfortunately the Exception has no Stack Trace.
The proxy applications are hosted in Azure Web Roles.
Any help identifying the culprit would be appreciated.
Your problem is a subtle one: the async lambda you're passing to PushStreamContent is being interpreted as an async void (because the PushStreamContent constructor only takes Actions as parameters). So there's a race condition between your module/handler completing and the completion of that async void lambda.
PushStreamContent detects the stream closing and treats that as the end of its Task (completing the module/handler), so you just need to be sure there are no async void methods that could still run after the stream is closed. async Task methods are OK, so this should fix it:
private static PushStreamContent CopyContentStream(HttpResponseMessage sourceContent)
{
    Func<Stream, Task> copyStreamAsync = async stream =>
    {
        using (stream)
        using (var sourceStream = await sourceContent.Content.ReadAsStreamAsync())
        {
            await sourceStream.CopyToAsync(stream);
        }
    };
    var content = new PushStreamContent(stream => { var _ = copyStreamAsync(stream); });
    return content;
}
If you want your proxies to scale a bit better, I also recommend getting rid of all the Result calls:
//Controller entry point.
public async Task<HttpResponseMessage> PostAsync()
{
    using (var client = new HttpClient())
    {
        var request = BuildRelayHttpRequest(this.Request);
        //HttpCompletionOption.ResponseHeadersRead - so that I can start streaming the response as soon
        //as it begins to filter in.
        var relayResult = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
        var returnMessage = BuildResponse(relayResult);
        return returnMessage;
    }
}
Your former code would block one thread for each request (until the headers are received); by using async all the way up to your controller level, you won't block a thread during that time.
I would like to add some wisdom for anyone else who landed here with the same error, but all of your code seems fine. Look for any lambda expressions passed into functions across the call-tree from where this occurs.
I was getting this error on a JavaScript JSON call to an MVC 5.x controller action. Everything I was doing up and down the stack was defined async Task and called using await.
However, using Visual Studio's "Set next statement" feature I systematically skipped over lines to determine which one caused it. I kept drilling down into local methods until I got to a call into an external NuGet package. The called method took an Action as a parameter and the lambda expression passed in for this Action was preceded by the async keyword. As Stephen Cleary points out above in his answer, this is treated as an async void, which MVC does not like. Luckily said package had *Async versions of the same methods. Switching to using those, along with some downstream calls to the same package fixed the problem.
I realize this is not a novel solution to the problem, but I passed over this thread a few times in my searches trying to resolve the issue because I thought I didn't have any async void or async <Action> calls, and I wanted to help someone else avoid that.
A slightly simpler model is that you can actually just use the HttpContents directly and pass them around inside the relay. I just uploaded a sample illustrating how you can relay both requests and responses asynchronously, and without buffering the content, in a relatively simple manner:
http://aspnet.codeplex.com/SourceControl/changeset/view/7ce67a547fd0#Samples/WebApi/RelaySample/ReadMe.txt
It is also beneficial to reuse the same HttpClient instance as this allows you to reuse connections where appropriate.
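A sketch of that reuse, adapted from the question's controller (the static field placement is an assumption; HttpClient is safe for concurrent requests):

```csharp
// One HttpClient shared for the lifetime of the relay, instead of
// new HttpClient() per request: the handler can then pool and reuse
// connections rather than re-establishing them on every call.
private static readonly HttpClient RelayClient = new HttpClient();

public async Task<HttpResponseMessage> PostAsync()
{
    var request = BuildRelayHttpRequest(this.Request);
    var relayResult = await RelayClient.SendAsync(
        request, HttpCompletionOption.ResponseHeadersRead);
    return BuildResponse(relayResult);
}
```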