I'm using PushStreamContent in ASP.NET Web API to push events from server to client (using Server-Sent Events). After each sent event, I call Flush on the Stream to push the buffered data to the client. However, I noticed that the flushing does not (always) happen. Sometimes, part of the data is sent to the client, and the rest is sent when the next event is written (which could happen seconds later).
Here's a code sample:
public class MyController : ApiController
{
    private static readonly string[] LineSeparators
        = new[] { Environment.NewLine };

    public HttpResponseMessage GetData(string id)
    {
        var response = Request.CreateResponse();
        response.Content = new PushStreamContent(
            new Func<Stream, HttpContent, TransportContext, Task>(StartStream),
            new MediaTypeHeaderValue("text/event-stream") { CharSet = "UTF-8" });
        return response;
    }

    private async Task StartStream(Stream outputStream, HttpContent content, TransportContext context)
    {
        using (outputStream)
        using (var writer = new StreamWriter(outputStream, new UTF8Encoding(false)))
        {
            writer.NewLine = "\n";
            while (true)
            {
                WriteEvent(writer, "ping", DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture));
                await Task.Delay(TimeSpan.FromSeconds(1));
            }
        }
    }

    private static void WriteEvent(TextWriter writer, string eventType, string data)
    {
        writer.WriteLine("event:" + eventType);
        writer.WriteLine("data:" + data);
        writer.WriteLine();
        writer.Flush(); // StreamWriter.Flush calls Flush on underlying Stream
    }
}
How can I disable the buffering of the data or force the flushing of the data?
I got it working.
In my case buffering was the issue. I had to:
1) disable gzip for my responses: <urlCompression doStaticCompression="true" doDynamicCompression="false" />
2) make sure that the proxy in production (Nginx) wasn't buffering either (see the sketch below)
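If Nginx is the buffering culprit, a minimal sketch of the per-response workaround is to send Nginx's X-Accel-Buffering hint from the controller (this assumes the proxy honors that convention; the alternative is proxy_buffering off; in the Nginx config itself):
public HttpResponseMessage GetData(string id)
{
    var response = Request.CreateResponse();
    response.Content = new PushStreamContent(
        new Func<Stream, HttpContent, TransportContext, Task>(StartStream),
        new MediaTypeHeaderValue("text/event-stream") { CharSet = "UTF-8" });

    // Hint to Nginx not to buffer this particular response. X-Accel-Buffering is an
    // Nginx convention; other proxies simply ignore the header.
    response.Headers.TryAddWithoutValidation("X-Accel-Buffering", "no");
    return response;
}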
After spending an entire day trying to figure out where the problem was, and going as far as (desperately) giving out a bounty, I found out that the problem lay in the fact that I was using HttpSelfHostServer, and needed to configure TransferMode = TransferMode.Streamed on the HttpSelfHostConfiguration. That's all.
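For anyone hitting the same self-host issue, a minimal sketch of that configuration (the base address and route are illustrative):
var config = new HttpSelfHostConfiguration("http://localhost:8080")
{
    // Without Streamed, the self-host buffers the whole response before sending it.
    TransferMode = TransferMode.Streamed
};
config.Routes.MapHttpRoute("Default", "api/{controller}/{id}", new { id = RouteParameter.Optional });

using (var server = new HttpSelfHostServer(config))
{
    server.OpenAsync().Wait();
    Console.WriteLine("Listening...");
    Console.ReadLine();
}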
The source of the issue is which stream gets flushed.
In your code sample you wrap the original stream with a StreamWriter and then flush only the StreamWriter.
You need to flush the original stream as well:
private async Task StartStream(Stream outputStream, HttpContent content, TransportContext context)
{
    using (outputStream)
    using (var writer = new StreamWriter(outputStream, new UTF8Encoding(false)))
    {
        writer.NewLine = "\n";
        while (true)
        {
            WriteEvent(writer, "ping", DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture));
            outputStream.Flush();
            await Task.Delay(TimeSpan.FromSeconds(1));
        }
    }
}
Related
I use the code below to get an OAuth code using a browser login. It works just fine, i.e. the auth code is returned, BUT there is some timing issue between responseOutput.WriteAsync() and http.Stop().
When I run in debug (breakpoint on the line marked **** below), the response is returned to the browser as expected.
If I comment out the line http.Stop() (**** below), the response is also returned to the browser as expected.
BUT if I run the code as usual, the browser shows "page cannot be found", so it looks as though responseOutput.WriteAsync() is not actually completing (not being 'awaited').
Do I need to do anything else to ensure that the response is completely sent before stopping the listener?
await GetAuthorizationCode();
public async Task GetAuthorizationCode()
{
    // Creates a listener (HttpListener prefixes must end with a trailing slash)
    string redirectUri = "http://127.0.0.1:12345/";
    HttpListener http = new HttpListener();
    http.Prefixes.Add(redirectUri);
    http.Start();

    // Open auth page in browser
    string authUri = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?........";
    var authorizationRequest = authUri;
    Process.Start(authorizationRequest);

    // Wait for auth response.
    HttpListenerContext context = await http.GetContextAsync();
    var sCode = context.Request.QueryString.Get("code");

    //Send response to the browser.
    HttpListenerResponse response = context.Response;
    string responseString = string.Format("<html><head></head><body>Auth OK.</body></html>");
    var buffer = Encoding.UTF8.GetBytes(responseString);
    response.ContentLength64 = buffer.Length;
    using (Stream responseOutput = response.OutputStream)
    {
        await responseOutput.WriteAsync(buffer, 0, buffer.Length);
        responseOutput.Close();
    }

    //****** this line causes problems
    http.Stop();
    AuthorizationCode = sCode;
}
It seems that I have to set KeepAlive prior to http.Stop():
response.KeepAlive = false;
Somehow, even with calling response.Close() and/or with a 'using' block around the response, it still needed this setting.
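In the context of the code above, a minimal sketch of where that setting goes (everything else unchanged):
HttpListenerResponse response = context.Response;
response.KeepAlive = false;          // let the connection close once the body is written
response.ContentLength64 = buffer.Length;
using (Stream responseOutput = response.OutputStream)
{
    await responseOutput.WriteAsync(buffer, 0, buffer.Length);
}
http.Stop();                         // the response has been sent at this point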
From an old http handler I made some time ago, I have this code:
protected static void WriteString(HttpListenerResponse response, HttpContentType contentType, Encoding encoding, string content)
{
    byte[] outputBuffer = encoding.GetBytes(content);
    response.ContentType = contentType.Value + "; charset=" + encoding.BodyName;
    response.ContentLength64 = outputBuffer.Length;
    response.Close(outputBuffer, true);
}
This code has been active in a program that usually stayed up for days, if not weeks, serving a few thousand requests each day, and it never had any kind of issue with memory leaks.
From docs:
You can customize the response by setting various properties, such as
StatusCode, StatusDescription, and Cookies. Use the
HttpListenerResponse.OutputStream property to obtain a Stream instance
to which response data can be written. Finally, send the response data
to the client by calling the Close method.
https://learn.microsoft.com/en-us/dotnet/api/system.net.httplistenerresponse?view=net-6.0
Contrary to what I thought, the HttpListener docs state that you should close the output stream:
https://learn.microsoft.com/en-us/dotnet/api/system.net.httplistener?view=net-6.0
Which is quite confusing:
System.IO.Stream output = response.OutputStream;
output.Write(buffer,0,buffer.Length);
// You must close the output stream.
output.Close();
listener.Stop();
I would suggest trying the following code:
using (Stream responseOutput = response.OutputStream)
{
    await responseOutput.WriteAsync(buffer, 0, buffer.Length);
    // The following two lines are possibly unnecessary inside a using statement
    //responseOutput.Flush(); // Ensure content is placed in the right place
    //responseOutput.Close();
}
response.Close();
http.Stop();
--- Update ---
I could run the following code under .NET 6.0 without any issue and got the response content in the browser:
class Program
{
    static async Task Main(string[] args)
    {
        await RunUntilServeOneRequest();
    }

    private static async Task RunUntilServeOneRequest()
    {
        Console.WriteLine("Starting...");

        // Creates a listener
        string listenUri = "http://127.0.0.1:12346/";
        HttpListener http = new HttpListener();
        http.Prefixes.Add(listenUri);
        http.Start();

        // Wait for request.
        Console.WriteLine("Awaiting request to " + listenUri);
        HttpListenerContext context = await http.GetContextAsync();

        //Send response to the browser.
        HttpListenerResponse response = context.Response;
        string responseString = string.Format($"<html><head></head><body>Hello world: {DateTime.Now.ToString("yyyy/MM/dd HH:mm:ss")}</body></html>");
        byte[] buffer = Encoding.UTF8.GetBytes(responseString);
        response.ContentLength64 = buffer.Length;
        using (Stream responseOutput = response.OutputStream)
        {
            await responseOutput.WriteAsync(buffer, 0, buffer.Length);
            responseOutput.Close();
        }
        response.Close();

        Console.WriteLine("Request answered");
        http.Stop();
    }
}
Good day,
I have an issue with a custom response in an Ocelot API Gateway with middleware.
Inside FormatResponse(context.Response) I change the response for a specific endpoint, and I can see the new response while debugging, but I still receive the original response in the final result in Postman.
For example, the original response:
{
    "name": "mo"
}
after the change should be:
{
    "name": "mo123"
}
My code:
// .NET Core 3.1
public async Task InvokeAsync(HttpContext context)
{
    context.Request.EnableBuffering();
    var builder = new StringBuilder();

    var request = await FormatRequest(context.Request);
    builder.Append("Request: ").AppendLine(request);
    builder.AppendLine("Request headers:");
    foreach (var header in context.Request.Headers)
    {
        builder.Append(header.Key).Append(':').AppendLine(header.Value);
    }

    //Copy a pointer to the original response body stream
    var originalBodyStream = context.Response.Body;

    //Create a new memory stream...
    using var responseBody = new MemoryStream();

    //...and use that for the temporary response body
    context.Response.Body = responseBody;

    //Continue down the Middleware pipeline, eventually returning to this class
    await _next(context);

    //Format the response from the server
    var response = await FormatResponse(context.Response); // here, I see the new response
    builder.Append("Response: ").AppendLine(response);
    builder.AppendLine("Response headers: ");
    foreach (var header in context.Response.Headers)
    {
        builder.Append(header.Key).Append(':').AppendLine(header.Value);
    }

    //Save log to chosen datastore
    _logger.LogInformation(builder.ToString());

    //Copy the contents of the new memory stream (which contains the response) to the original
    //stream, which is then returned to the client.
    await responseBody.CopyToAsync(originalBodyStream);
}

private async Task<string> FormatResponse(HttpResponse response)
{
    //We need to read the response stream from the beginning...
    response.Body.Seek(0, SeekOrigin.Begin);

    //...and copy it into a string
    string text = await new StreamReader(response.Body).ReadToEndAsync();
    text = CustomRes(text); //************************ here, I change the response

    //We need to reset the reader for the response so that the client can read it.
    response.Body.Seek(0, SeekOrigin.Begin);

    //Return the string for the response, including the status code (e.g. 200, 404, 401, etc.)
    return $"{response.StatusCode}: {text}";
}
Reference: https://www.abhith.net/blog/dotnet-core-api-gateway-ocelot-logging-http-requests-response-including-headers-body/
The best answer is from Richard Deeming:
https://www.codeproject.com/Questions/5294847/Fix-issue-with-custom-response-NET-core-API-gatewa
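In short, changing the text inside FormatResponse only changes a local string; the bytes in the temporary MemoryStream are untouched, so the original body is what gets copied back to the client. A minimal sketch of that kind of fix (my own paraphrase, assuming the goal is to overwrite the buffered body with the modified text before copying it back):
// After _next(context) has run and the body sits in responseBody (the MemoryStream):
responseBody.Seek(0, SeekOrigin.Begin);
var text = await new StreamReader(responseBody).ReadToEndAsync();
var modified = CustomRes(text);                      // the asker's own transformation

var bytes = Encoding.UTF8.GetBytes(modified);
responseBody.SetLength(0);                           // drop the original buffered body
await responseBody.WriteAsync(bytes, 0, bytes.Length);
context.Response.ContentLength = bytes.Length;       // keep Content-Length consistent

responseBody.Seek(0, SeekOrigin.Begin);
await responseBody.CopyToAsync(originalBodyStream);  // now the modified body reaches the client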
I am developing a .NET Core middleware (API) and thinking of using pipes with the following flow. Can someone tell me whether this is a good approach that complies with best practices, or should I use a different strategy?
1. Request comes to the API.
2. Authorization pipe validates the request.
3. Request pipe logs the request into the db.
4. Request goes to the API, performs the action, and returns a result.
5. Response pipe gets the response, logs it into the db, and returns the result to the client.
I know that we can read the stream only one time (point 3), but I have already figured this out; after reading it, I attach it back to the request stream.
So the confusion is: where should I write the response? In the API, or in a separate pipe?
If I do it in a separate pipe, then I am handling my response two times (once creating the response in the API, and a second time reading the response in the separate pipe), which is a performance hit.
Can I pass the data from point 4 to point 5, i.e. from the API to my pipe, and from there add that response to the response stream? If that is the right approach, how can I pass the data from the API to the pipe?
Yes, the response stream can only be read once. You can use a MemoryStream to load the response; reference article:
First, read the request and format it into a string.
Next, create a dummy MemoryStream to load the new response into.
Then, wait for the server to return a response.
Finally, copy the dummy MemoryStream (containing the actual response) into the original stream, which gets returned to the client.
Code sample:
public class RequestResponseLoggingMiddleware
{
    private readonly RequestDelegate _next;

    public RequestResponseLoggingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        //First, get the incoming request
        var request = await FormatRequest(context.Request);

        //Copy a pointer to the original response body stream
        var originalBodyStream = context.Response.Body;

        //Create a new memory stream...
        using (var responseBody = new MemoryStream())
        {
            //...and use that for the temporary response body
            context.Response.Body = responseBody;

            //Continue down the Middleware pipeline, eventually returning to this class
            await _next(context);

            //Format the response from the server
            var response = await FormatResponse(context.Response);

            //TODO: Save log to chosen datastore

            //Copy the contents of the new memory stream (which contains the response) to the original stream, which is then returned to the client.
            await responseBody.CopyToAsync(originalBodyStream);
        }
    }

    private async Task<string> FormatRequest(HttpRequest request)
    {
        var body = request.Body;

        //This line allows us to set the reader for the request back at the beginning of its stream.
        request.EnableRewind();

        //We now need to read the request stream. First, we create a new byte[] with the same length as the request stream...
        var buffer = new byte[Convert.ToInt32(request.ContentLength)];

        //...Then we copy the entire request stream into the new buffer.
        await request.Body.ReadAsync(buffer, 0, buffer.Length);

        //We convert the byte[] into a string using UTF8 encoding...
        var bodyAsText = Encoding.UTF8.GetString(buffer);

        //..and finally, assign the read body back to the request body, which is allowed because of EnableRewind()
        request.Body = body;

        return $"{request.Scheme} {request.Host}{request.Path} {request.QueryString} {bodyAsText}";
    }

    private async Task<string> FormatResponse(HttpResponse response)
    {
        //We need to read the response stream from the beginning...
        response.Body.Seek(0, SeekOrigin.Begin);

        //...and copy it into a string
        string text = await new StreamReader(response.Body).ReadToEndAsync();

        //We need to reset the reader for the response so that the client can read it.
        response.Body.Seek(0, SeekOrigin.Begin);

        //Return the string for the response, including the status code (e.g. 200, 404, 401, etc.)
        return $"{response.StatusCode}: {text}";
    }
}
And register the middleware:
app.UseMiddleware<RequestResponseLoggingMiddleware>();
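For context, a minimal sketch of where that registration usually lives (the surrounding Startup.Configure method is the standard template's, not from the question):
public void Configure(IApplicationBuilder app)
{
    // Register the logging middleware early so it wraps everything that runs after it.
    app.UseMiddleware<RequestResponseLoggingMiddleware>();

    app.UseMvc(); // or UseRouting()/UseEndpoints() on newer versions
}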
Basically I want to stream a file to a Web API, and once inside the Web API controller I would like to pass the data, as it comes in, to lower-level logic via a stream reader. I tried the code below, found in another SO post, with some modification, but I get:
An asynchronous module or handler completed while an asynchronous
operation was still pending.
public async void Put(int id, HttpRequestMessage request)
{
    if (!Request.Content.IsMimeMultipartContent())
        throw new InvalidOperationException();

    var provider = new MultipartMemoryStreamProvider();
    await Request.Content.ReadAsMultipartAsync(provider);

    var file = provider.Contents.First();
    var filename = file.Headers.ContentDisposition.FileName.Trim('\"');
    var buffer = await file.ReadAsByteArrayAsync();
    var stream = new MemoryStream(buffer);
    using (var s = new StreamReader(stream))
    {
        saveFile.Execute(id, s);
    }
}
I'm open to other solutions as long as I am streaming the data as it comes in. I'm new to await and async and I'm probably making a basic mistake. Any ideas?
Change async void to async Task
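Applied to the action above, only the signature changes (a sketch; the body stays the same):
public async Task Put(int id, HttpRequestMessage request)
{
    // ...same body as above. Returning Task lets Web API await the operation
    // instead of treating it as fire-and-forget, which is what triggers the
    // "asynchronous module or handler completed..." error.
}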
I have a fairly bog-standard .NET MVC 4 Web API application.
public class LogsController : ApiController
{
    public HttpResponseMessage PostLog(List<LogDto> logs)
    {
        if (logs != null && logs.Any())
        {
            var goodLogs = new List<Log>();
            var badLogs = new List<LogBad>();

            foreach (var logDto in logs)
            {
                if (logDto.IsValid())
                {
                    goodLogs.Add(logDto.ToLog());
                }
                else
                {
                    badLogs.Add(logDto.ToLogBad());
                }
            }

            if (goodLogs.Any())
            {
                _logsRepo.Save(goodLogs);
            }

            if (badLogs.Any())
            {
                _logsBadRepo.Save(badLogs);
            }
        }

        return new HttpResponseMessage(HttpStatusCode.OK);
    }
}
This all works fine: I have devices that are able to send me their logs and it works well. However, we are now starting to have concerns about the size of the data being transferred, and we want to look at accepting POSTs that have been compressed using GZIP.
How would I go about doing this? Is it a setting in IIS, or could I use action filters?
EDIT 1
Following up on Filip's answer, my thinking is that I need to intercept the processing of the request before it gets to my controller. If I can catch the request before the Web API framework attempts to parse the body into my business object (which fails because the body is still compressed), then I can decompress the body and pass the request back into the processing chain, and hopefully the Web API framework will be able to parse the (decompressed) body into my business objects.
It looks like using a DelegatingHandler is the way to go. It gives me access to the request during processing, but before my controller. So I tried the following:
public class gZipHandler : DelegatingHandler
{
    protected override System.Threading.Tasks.Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        string encodingType = request.Headers.AcceptEncoding.First().Value;
        request.Content = new DeCompressedContent(request.Content, encodingType);
        return base.SendAsync(request, cancellationToken);
    }
}

public class DeCompressedContent : HttpContent
{
    private HttpContent originalContent;
    private string encodingType;

    public DeCompressedContent(HttpContent content, string encodType)
    {
        originalContent = content;
        encodingType = encodType;
    }

    protected override bool TryComputeLength(out long length)
    {
        length = -1;
        return false;
    }

    protected override Task<Stream> CreateContentReadStreamAsync()
    {
        return base.CreateContentReadStreamAsync();
    }

    protected override Task SerializeToStreamAsync(Stream stream, TransportContext context)
    {
        Stream compressedStream = null;

        if (encodingType == "gzip")
        {
            compressedStream = new GZipStream(stream, CompressionMode.Decompress, leaveOpen: true);
        }

        return originalContent.CopyToAsync(compressedStream).ContinueWith(tsk =>
        {
            if (compressedStream != null)
            {
                compressedStream.Dispose();
            }
        });
    }
}
This seems to be working OK. The SendAsync method is being called before my controller, and the constructor for DeCompressedContent is being called. However, SerializeToStreamAsync is never called, so I added CreateContentReadStreamAsync to see if that's where the decompression should be happening, but that's not being called either.
I feel like I am close to the solution, but just need a little bit extra to get it over the line.
I had the same requirement to POST gzipped data to a .NET web api controller. I came up with this solution:
public class GZipToJsonHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        // Handle only if content type is 'application/gzip'
        if (request.Content.Headers.ContentType == null ||
            request.Content.Headers.ContentType.MediaType != "application/gzip")
        {
            return base.SendAsync(request, cancellationToken);
        }

        // Read in the input stream, then decompress into the output stream.
        // Doing this asynchronously, but not really required at this point
        // since we end up waiting on it right after this.
        Stream outputStream = new MemoryStream();
        Task task = request.Content.ReadAsStreamAsync().ContinueWith(t =>
        {
            Stream inputStream = t.Result;
            var gzipStream = new GZipStream(inputStream, CompressionMode.Decompress);

            gzipStream.CopyTo(outputStream);
            gzipStream.Dispose();

            outputStream.Seek(0, SeekOrigin.Begin);
        });

        // Wait for input stream and decompression to complete. Would be nice
        // to not block here and work async when ready instead, but I couldn't
        // figure out how to do it in context of a DelegatingHandler.
        task.Wait();

        // This next section is the key...

        // Save the original content
        HttpContent origContent = request.Content;

        // Replace request content with the newly decompressed stream
        request.Content = new StreamContent(outputStream);

        // Copy all headers from original content in to new one
        foreach (var header in origContent.Headers)
        {
            request.Content.Headers.Add(header.Key, header.Value);
        }

        // Replace the original content-type with the content type
        // of the decompressed data. In our case, we can assume application/json. A
        // more generic and reusable handler would need some other
        // way to differentiate the decompressed content type.
        request.Content.Headers.Remove("Content-Type");
        request.Content.Headers.Add("Content-Type", "application/json");

        return base.SendAsync(request, cancellationToken);
    }
}
Using this approach, the existing controller, which normally works with JSON content and automatic model binding, continued to work without any changes.
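For completeness, a handler like this gets registered the same way as the one further down in this thread (a sketch, assuming global registration at startup):
GlobalConfiguration.Configuration.MessageHandlers.Add(new GZipToJsonHandler());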
I'm not sure why the other answer was accepted. It provides a solution for handling the responses (which is common), but not requests (which is uncommon). The Accept-Encoding header is used to specify acceptable response encodings, and is not related to request encodings.
I believe the correct answer is Kaliatech's, and I would have left this as a comment and voted his up if I had enough reputation points, since I think his is basically correct.
However, my situation called for looking at the encoding type rather than the content type. Using this approach, the calling system can still specify the content type (json/xml/etc.) in the Content-Type header, but indicate that the data is encoded using gzip or potentially another encoding/compression mechanism. This prevented me from needing to change the content type after decoding the input, and it allows any content type information to flow through in its original state.
Here's the code. Again, 99% of this is Kaliatech's answer including the comments, so please vote his post up if this is useful.
public class CompressedRequestHandler : DelegatingHandler
{
    protected override System.Threading.Tasks.Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        if (IsRequestCompressed(request))
        {
            request.Content = DecompressRequestContent(request);
        }

        return base.SendAsync(request, cancellationToken);
    }

    private bool IsRequestCompressed(HttpRequestMessage request)
    {
        if (request.Content.Headers.ContentEncoding != null &&
            request.Content.Headers.ContentEncoding.Contains("gzip"))
        {
            return true;
        }

        return false;
    }

    private HttpContent DecompressRequestContent(HttpRequestMessage request)
    {
        // Read in the input stream, then decompress into the output stream.
        // Doing this asynchronously, but not really required at this point
        // since we end up waiting on it right after this.
        Stream outputStream = new MemoryStream();
        Task task = request.Content.ReadAsStreamAsync().ContinueWith(t =>
        {
            Stream inputStream = t.Result;
            var gzipStream = new GZipStream(inputStream, CompressionMode.Decompress);

            gzipStream.CopyTo(outputStream);
            gzipStream.Dispose();

            outputStream.Seek(0, SeekOrigin.Begin);
        });

        // Wait for input stream and decompression to complete. Would be nice
        // to not block here and work async when ready instead, but I couldn't
        // figure out how to do it in context of a DelegatingHandler.
        task.Wait();

        // Save the original content
        HttpContent origContent = request.Content;

        // Replace request content with the newly decompressed stream
        HttpContent newContent = new StreamContent(outputStream);

        // Copy all headers from original content in to new one
        foreach (var header in origContent.Headers)
        {
            newContent.Headers.Add(header.Key, header.Value);
        }

        return newContent;
    }
}
I then registered this handler globally, which could be a dicey proposition if you are vulnerable to DoS attacks, but our service is locked down, so it works for us:
GlobalConfiguration.Configuration.MessageHandlers.Add(new CompressedRequestHandler());
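As a usage illustration, this is roughly what a client posting gzip-compressed JSON to such a service might look like (a sketch; the URL and payload are made up, and this runs inside an async method):
using (var client = new HttpClient())
{
    var json = "[{\"Message\":\"hello\"}]";

    // Compress the JSON payload with GZip before sending it.
    byte[] compressed;
    using (var ms = new MemoryStream())
    {
        using (var gzip = new GZipStream(ms, CompressionMode.Compress))
        {
            var bytes = Encoding.UTF8.GetBytes(json);
            gzip.Write(bytes, 0, bytes.Length);
        }
        compressed = ms.ToArray();
    }

    var content = new ByteArrayContent(compressed);
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    content.Headers.ContentEncoding.Add("gzip"); // what CompressedRequestHandler checks for

    var result = await client.PostAsync("http://localhost/api/logs", content);
}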
While Web API doesn't support the Accept-Encoding header out of the box, Kiran has a terrific blog post on how to do that using a custom MessageHandler: http://blogs.msdn.com/b/kiranchalla/archive/2012/09/04/handling-compression-accept-encoding-sample.aspx
If you implement his solution, all you need to do is issue a request with an Accept-Encoding: gzip or Accept-Encoding: deflate header, and the Web API response will be compressed in the message handler for you.
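If you go that response-compression route, a client opts in roughly like this (a sketch, inside an async method; HttpClientHandler's automatic decompression is independent of the server-side handler):
var handler = new HttpClientHandler
{
    // Sends Accept-Encoding: gzip, deflate and transparently decompresses the response.
    AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
};

using (var client = new HttpClient(handler))
{
    var response = await client.GetAsync("http://localhost/api/logs");
    var body = await response.Content.ReadAsStringAsync();
}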
Try this:
public class DeCompressedContent : HttpContent
{
    private HttpContent originalContent;
    private string encodingType;

    /// <summary>
    /// Wraps the original request content so it can be decompressed on serialization.
    /// </summary>
    /// <param name="content"></param>
    /// <param name="encodingType"></param>
    public DeCompressedContent(HttpContent content, string encodingType)
    {
        if (content == null) throw new ArgumentNullException("content");
        if (string.IsNullOrWhiteSpace(encodingType)) throw new ArgumentNullException("encodingType");

        this.originalContent = content;
        this.encodingType = encodingType.ToLowerInvariant();

        if (!this.encodingType.Equals("gzip", StringComparison.CurrentCultureIgnoreCase) && !this.encodingType.Equals("deflate", StringComparison.CurrentCultureIgnoreCase))
        {
            throw new InvalidOperationException(string.Format("Encoding {0} is not supported. Only supports gzip or deflate encoding", this.encodingType));
        }

        foreach (KeyValuePair<string, IEnumerable<string>> header in originalContent.Headers)
        {
            this.Headers.TryAddWithoutValidation(header.Key, header.Value);
        }

        this.Headers.ContentEncoding.Add(this.encodingType);
    }

    /// <summary>
    /// Copies the original content to a buffer, then writes the decompressed bytes to the target stream.
    /// </summary>
    /// <param name="stream"></param>
    /// <param name="context"></param>
    /// <returns></returns>
    protected override Task SerializeToStreamAsync(Stream stream, TransportContext context)
    {
        var output = new MemoryStream();

        return this.originalContent
            .CopyToAsync(output).ContinueWith(task =>
            {
                // go to start
                output.Seek(0, SeekOrigin.Begin);

                if (this.encodingType.Equals("gzip", StringComparison.CurrentCultureIgnoreCase))
                {
                    using (var dec = new GZipStream(output, CompressionMode.Decompress))
                    {
                        dec.CopyTo(stream);
                    }
                }
                else
                {
                    using (var def = new DeflateStream(output, CompressionMode.Decompress))
                    {
                        def.CopyTo(stream);
                    }
                }

                if (output != null)
                    output.Dispose();
            });
    }

    /// <summary>
    /// The decompressed length is not known in advance.
    /// </summary>
    /// <param name="length"></param>
    /// <returns></returns>
    protected override bool TryComputeLength(out long length)
    {
        length = -1;
        return (false);
    }
}
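This content class would be plugged in from a DelegatingHandler much like the asker's original attempt; a sketch (the handler name is mine), keyed off Content-Encoding rather than Accept-Encoding as recommended in the answers above:
public class DecompressionHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Only rewrap the content when the request body is actually compressed.
        var encoding = request.Content == null
            ? null
            : request.Content.Headers.ContentEncoding.FirstOrDefault();

        if (encoding == "gzip" || encoding == "deflate")
        {
            request.Content = new DeCompressedContent(request.Content, encoding);
        }

        return base.SendAsync(request, cancellationToken);
    }
}

// Registered globally, as with the other handlers in this thread:
// GlobalConfiguration.Configuration.MessageHandlers.Add(new DecompressionHandler());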