I am trying to make multiple identical requests to a REST web service. The issue is that each request appears to open a new socket, which makes the operation roughly 10x slower than the same operation through a SOAP proxy channel.
I have looked into HttpWebRequest.KeepAlive, but I can't call GetResponse() on the same request object multiple times.
The snippet below shows the idea of what I need; I know it will not work as written, for the reason mentioned above:
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(serviceUri);
req.KeepAlive = true;
var dcs = new DataContractSerializer(typeof(Test));
while (enabled)
{
    var stream = req.GetResponse().GetResponseStream();
    if (stream != null)
    {
        var test = (Test)dcs.ReadObject(stream);
        counter++;
    }
}
EDIT: This is the loop I am using for the SOAP test:
private void SoapLoop()
{
    IService1 proxy =
        ChannelFactory<IService1>.CreateChannel(
            tcpBinding, endpointAddress);
    while (enabled)
    {
        var test = proxy.GetRead(new GetReadRequest());
        counter++;
    }
}
The object I am transferring is the same in both SOAP and REST, and is ~300 KB.
EDIT2: I did some further tests:
On small objects (e.g. 100 bytes) REST outperforms SOAP by about 2 to 1, but on large objects (objects containing large image byte arrays) SOAP is much faster.
Another odd thing: when I comment out the line var test = (Test)dcs.ReadObject(stream); in the REST loop, the performance actually goes down.
Close the first response before you open the next one. Consider putting the response and its stream inside using statements. Note that KeepAlive is true by default.
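A minimal sketch of what that looks like, reusing the serviceUri, Test, enabled, and counter identifiers from the question. A new HttpWebRequest is created per iteration, but with KeepAlive left at its default of true the pooled TCP connection is reused, so no new socket should be opened:

```csharp
var dcs = new DataContractSerializer(typeof(Test));
while (enabled)
{
    // HttpWebRequest is single-use: create a fresh one each iteration.
    var req = (HttpWebRequest)WebRequest.Create(serviceUri);
    req.KeepAlive = true; // the default, shown here for clarity

    // Disposing the response returns the connection to the pool,
    // which is what allows the next request to reuse the socket.
    using (var response = req.GetResponse())
    using (var stream = response.GetResponseStream())
    {
        var test = (Test)dcs.ReadObject(stream);
        counter++;
    }
}
```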
Problem
I am trying to upload some data to a web-service.
I want to upload the data in chunks, and have the web-service read each chunk in turn. However, what I find in practice is that the web-service will only read a full buffer at a time.
Is there a way to get WebAPI (running self-hosted by Owin ideally, but I can use IIS if necessary) to respect the transfer chunks?
I have verified in Wireshark that my client is sending the data chunked hence why I believe this is a WebAPI issue.
For clarity, streaming data in the response works absolutely fine - my question is about reading chunked data from the request stream.
Code
The controller looks like this:
using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using System.Web.Http;

public class StreamingController : ApiController
{
    [HttpPost]
    public async Task<HttpResponseMessage> Upload()
    {
        var stream = await this.Request.Content.ReadAsStreamAsync();
        var data = new byte[20];
        int chunkCount = 1;
        while (true)
        {
            // I was hoping that every time I sent a chunk, then
            // ReadAsync would return, but I find that it will only
            // return when I have sent 20 bytes of data.
            var bytesRead = await stream.ReadAsync(data, 0, data.Length);
            if (bytesRead <= 0)
            {
                break;
            }
            Console.WriteLine($"{chunkCount++}: {Encoding.UTF8.GetString(data, 0, bytesRead)}");
        }
        return new HttpResponseMessage(HttpStatusCode.OK);
    }
}
My test client looks like this:
void Main()
{
    var url = "http://localhost:6001/streaming/upload";
    var relayRequest = (HttpWebRequest)WebRequest.Create(url);
    relayRequest.Method = "POST";
    relayRequest.AllowWriteStreamBuffering = false;
    relayRequest.AllowReadStreamBuffering = false;
    relayRequest.SendChunked = true;
    relayRequest.ContentType = "application/octet-stream";

    var stream = relayRequest.GetRequestStream();
    string nextLine;
    int totalBytes = 0;

    // Read a series of lines from the console and transmit them to the server.
    while (!string.IsNullOrEmpty((nextLine = Console.ReadLine())))
    {
        var bytes = Encoding.UTF8.GetBytes(nextLine);
        totalBytes += bytes.Length;
        Console.WriteLine(
            "CLIENT: Sending {0} bytes ({1} total)",
            bytes.Length,
            totalBytes);
        stream.Write(bytes, 0, bytes.Length);
        stream.Flush();
    }

    var response = relayRequest.GetResponse();
    Console.WriteLine(response);
}
Justification
My specific motivation is that I am writing an HTTPS tunnel for an RTP client. However, this question would also make sense in the context of an instant-messaging chat application: you wouldn't want a partial chat message to come through, and then have to wait for message 2 to find out the end of message 1.
The decoding of Transfer-Encoding: chunked happens a long way away from your controllers. Depending on your host, it may not even happen in the application at all, but be handled by the http.sys pipeline API that most servers plug into.
For your application to even have a chance of looking into this data, you'll need to move away from IIS/HttpListener and use Sockets instead.
Of interest might be the Nowin project, which provides all the OWIN features without using HttpListener, relying instead on the async Socket APIs. I don't know much about it, but there may be hooks to get at the stream before it gets decoded. It seems like a lot of effort, though.
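As an illustration only (not Nowin's actual API), dropping to a raw socket shows the difference: NetworkStream.Read returns as soon as any bytes have arrived, so each flush from the client is observed separately, with the chunked framing ("size CRLF data CRLF") still intact and parseable by your own code:

```csharp
using System;
using System.Net;
using System.Net.Sockets;

// Minimal sketch, not production code: accept one raw TCP connection
// and log data as it arrives, instead of waiting for a full buffer.
var listener = new TcpListener(IPAddress.Loopback, 6001);
listener.Start();
using (var client = listener.AcceptTcpClient())
using (var stream = client.GetStream())
{
    var buffer = new byte[4096];
    int read;
    // Read returns per arrival, so each client-side Flush surfaces
    // here as its own read, chunked framing included.
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        Console.WriteLine($"Received {read} bytes");
    }
}
listener.Stop();
```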
My application has an "export" feature. In terms of functionality, it works like this:
When the user presses the "Export" button (after configuring the options etc.), the application first runs a relatively quick query that determines the IDs of all the objects that need to be exported. Then, for each object, it executes a calculation that can take a relatively long time to finish (up to 1 s per object). While this is happening, the user watches a progress bar, which is easy to render since we know the expected number of objects as well as how many have been processed so far.
I would like to move this functionality to the webservice, for all the usual reasons. However, one additional wrinkle in this process is that our users often have a lot of network latency. Thus, I can't afford to make 1000 requests if I have 1000 rows to process.
What I'd like to do is to return a custom stream from the service. I can write the row count into the first 4 bytes of the stream. The client will read these 4 bytes, initialize the progress bar, and then proceed to read the stream and deserialize them on the fly, updating the progress bar as it deserializes each one. Meanwhile, the server will write objects into the stream as they become available.
To make matters more interesting, since I'm sending back a long list of objects, I would really like to use protobuf-net to reduce the overhead. Hence, I have several questions:
Is what I am planning to do even possible? Does it make sense, or is there a better way?
How can I return a custom stream from a ServiceStack service?
When I am deserializing a stream of objects on the client side, how can I get some sort of notification as each object is deserialized? I need it to update the progress bar.
I found this answer, which kind of does what I want, but doesn't truly address my questions: Lazy, stream driven object serialization with protobuf-net
EDIT: I should have mentioned that my client is a desktop C# application which uses ServiceStack and protobuf-net.
I recommend paging the result set over multiple requests (i.e. using Skip/Take) instead of trying to return a stream of results, which would require a custom response, custom serialization, and custom clients to consume the streamed response. Paging is a more stateless approach that is better suited to HTTP: each query can be cached independently; retrying is better supported (if one request fails, you can resume from the last successful response instead of downloading everything again); and you get better debuggability and introspection with existing HTTP tools.
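A rough sketch of that paged approach using ServiceStack's typed JsonServiceClient; the DTO names, the Total/Results properties, and the UpdateProgressBar helper are hypothetical, invented here for illustration:

```csharp
// Hypothetical request/response DTOs; IReturn<T> lets the typed
// client infer the response type for Get().
public class ExportPage : IReturn<ExportPageResponse>
{
    public int Skip { get; set; }
    public int Take { get; set; }
}

public class ExportPageResponse
{
    public int Total { get; set; }
    public List<ExportItem> Results { get; set; }
}

// Client loop: the first page also returns the total row count,
// which initializes the progress bar; each page advances it.
var client = new JsonServiceClient(baseUrl);
int skip = 0;
const int pageSize = 50;
ExportPageResponse page;
do
{
    page = client.Get(new ExportPage { Skip = skip, Take = pageSize });
    skip += page.Results.Count;
    UpdateProgressBar(skip, page.Total); // hypothetical UI callback
} while (page.Results.Count == pageSize);
```

A failed request here only costs one page: the client retries from the current skip offset rather than restarting the whole export.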
Custom Streaming response
Here's an example that shows how to return an Observable StreamWriter and a custom Observable client to consume the streamed response: https://gist.github.com/bamboo/5078236
It uses custom JSON serialization to ensure that each element is written before it's flushed to the stream so the client consuming the stream can expect each read to retrieve an entire record. This custom serialization would be more difficult if using a binary serializer like protocol buffers.
Returning Binary and Stream responses in ServiceStack
The ImageService shows different ways of returning binary or Stream responses in ServiceStack:
Returning a Stream in a HttpResult
public object Any(ImageAsStream request)
{
    using (var image = new Bitmap(100, 100))
    {
        using (var g = Graphics.FromImage(image))
        {
            g.Clear(request.Format.ToImageColor());
        }
        var ms = new MemoryStream();
        image.Save(ms, request.Format.ToImageFormat());
        return new HttpResult(ms, request.Format.ToImageMimeType());
    }
}
Returning raw byte[]
public object Any(ImageAsBytes request)
{
    using (var image = new Bitmap(100, 100))
    {
        using (var g = Graphics.FromImage(image))
        {
            g.Clear(request.Format.ToImageColor());
        }
        using (var m = new MemoryStream())
        {
            image.Save(m, request.Format.ToImageFormat());
            var imageData = m.ToArray(); //buffers
            return new HttpResult(imageData, request.Format.ToImageMimeType());
        }
    }
}
The examples above show how you can add additional metadata to the HTTP response by wrapping the Stream and byte[] responses in an HttpResult; if you prefer, you can also return the byte[], Stream, or IStreamWriter responses directly.
Writing directly to the Response Stream
public void Any(ImageWriteToResponse request)
{
    using (var image = new Bitmap(100, 100))
    {
        using (var g = Graphics.FromImage(image))
        {
            g.Clear(request.Format.ToImageColor());
        }
        base.Response.ContentType = request.Format.ToImageMimeType();
        image.Save(base.Response.OutputStream, request.Format.ToImageFormat());
        base.Response.Close();
    }
}
Returning a Custom Result
public object Any(ImageAsCustomResult request)
{
    var image = new Bitmap(100, 100);
    using (var g = Graphics.FromImage(image))
    {
        g.Clear(request.Format.ToImageColor());
        return new ImageResult(image, request.Format.ToImageFormat());
    }
}
Where you can write to the response stream directly by implementing IStreamWriter.WriteTo():
//Your own Custom Result, writes directly to response stream
public class ImageResult : IDisposable, IStreamWriter, IHasOptions
{
    private readonly Image image;
    private readonly ImageFormat imgFormat;

    public ImageResult(Image image, ImageFormat imgFormat = null)
    {
        this.image = image;
        this.imgFormat = imgFormat ?? ImageFormat.Png;
        this.Options = new Dictionary<string, string> {
            { HttpHeaders.ContentType, this.imgFormat.ToImageMimeType() }
        };
    }

    public void WriteTo(Stream responseStream)
    {
        using (var ms = new MemoryStream())
        {
            image.Save(ms, imgFormat);
            ms.WriteTo(responseStream);
        }
    }

    public void Dispose()
    {
        this.image.Dispose();
    }

    public IDictionary<string, string> Options { get; set; }
}
I am writing a test harness to test an HTTP POST. The test case sends 8 HTTP requests using UploadValuesAsync in the WebClient class, then sleeps for 10 seconds before sending the next 8. I record the start and end time of each request. When I compute the average response time, I get around 800 ms. But when I run the same test case synchronously using UploadValues, the average response time is about 250 ms. Can you tell me the reason for the difference between these two methods? I was expecting a lower response time with async, but I did not get it.
Here is the code that sends 8 requests asynchronously:
var count = 0;
foreach (var nameValueCollection in requestCollections)
{
    count++;
    NameValueCollection collection = nameValueCollection;
    PostToURL(collection, uri);
    if (count % 8 == 0)
    {
        Thread.Sleep(TimeSpan.FromSeconds(10));
        count = 0;
    }
}
UPDATED
Here is the code that sends 8 requests synchronously:
public void PostToURLSync(NameValueCollection collection, Uri uri)
{
    var response = new ServiceResponse
    {
        Response = "Not Started",
        Request = string.Join(";", collection.Cast<string>()
            .Select(col => String.Concat(col, "=", collection[col])).ToArray()),
        ApplicationId = collection["ApplicationId"]
    };
    try
    {
        using (var transportType2 = new DerivedWebClient())
        {
            transportType2.Expect100Continue = false;
            transportType2.Timeout = TimeSpan.FromMilliseconds(2000);
            response.StartTime = DateTime.Now;
            var responeByte = transportType2.UploadValues(uri, "POST", collection);
            response.EndTime = DateTime.Now;
            response.Response = Encoding.Default.GetString(responeByte);
        }
    }
    catch (Exception exception)
    {
        Console.WriteLine(exception.ToString());
    }
    response.ResponseInMs = (int)response.EndTime.Subtract(response.StartTime).TotalMilliseconds;
    responses.Add(response);
    Console.WriteLine(response.ResponseInMs);
}
Here is the code that posts to the HTTP URI:
public void PostToURL(NameValueCollection collection, Uri uri)
{
    var response = new ServiceResponse
    {
        Response = "Not Started",
        Request = string.Join(";", collection.Cast<string>()
            .Select(col => String.Concat(col, "=", collection[col])).ToArray()),
        ApplicationId = collection["ApplicationId"]
    };
    try
    {
        using (var transportType2 = new DerivedWebClient())
        {
            transportType2.Expect100Continue = false;
            transportType2.Timeout = TimeSpan.FromMilliseconds(2000);
            response.StartTime = DateTime.Now;
            transportType2.UploadValuesCompleted += new UploadValuesCompletedEventHandler(transportType2_UploadValuesCompleted);
            transportType2.UploadValuesAsync(uri, "POST", collection, response);
        }
    }
    catch (Exception exception)
    {
        Console.WriteLine(exception.ToString());
    }
}
Here is the UploadValuesCompleted event handler:
private void transportType2_UploadValuesCompleted(object sender, UploadValuesCompletedEventArgs e)
{
    var now = DateTime.Now;
    var response = (ServiceResponse)e.UserState;
    response.EndTime = now;
    response.ResponseInMs = (int)response.EndTime.Subtract(response.StartTime).TotalMilliseconds;
    Console.WriteLine(response.ResponseInMs);
    if (e.Error != null)
    {
        response.Response = e.Error.ToString();
    }
    else if (e.Result != null && e.Result.Length > 0)
    {
        string downloadedData = Encoding.Default.GetString(e.Result);
        response.Response = downloadedData;
    }
    //Recording response in Global variable
    responses.Add(response);
}
One problem you're probably running into is that .NET, by default, throttles outgoing HTTP connections to the limit (2 concurrent connections per remote host) mandated by the relevant RFC. Assuming 2 concurrent connections and 250 ms per request, the response time for your first 2 requests will be 250 ms, the second 2 will be 500 ms, the third 750 ms, and the last 1000 ms. That yields a 625 ms average response time, which is not far from the 800 ms you're seeing.
To remove the throttling, increase ServicePointManager.DefaultConnectionLimit to the maximum number of concurrent connections you want to support, and you should see your average response time go down a lot.
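For example, set it once at application startup, before the first request to the host is issued (the ServicePoint captures the limit when it is created):

```csharp
// Allow up to 8 concurrent connections per remote host instead of
// the default of 2, so the 8 parallel uploads are not serialized
// into pairs by the connection pool.
ServicePointManager.DefaultConnectionLimit = 8;
```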
A secondary problem may be that the server itself is slower handling multiple concurrent connections than handing one request at a time. Even once you unblock the throttling problem above, I'd expect each of the async requests to, on average, execute somewhat slower than if the server was only executing one request at a time. How much slower depends on how well the server is optimized for concurrent requests.
A final problem may be caused by test methodology. For example, if your test client simulates a browser session by storing cookies and re-sending them with each request, it may run into trouble with servers that serialize requests from a single user. This is often a simplification in server apps so they don't have to deal with locking cross-request state such as session state. If you're running into this problem, make sure each WebClient sends different cookies to simulate different users.
I'm not saying you're running into all three of these problems; you might only be hitting one or two. But these are the most likely culprits for the behavior you're seeing.
As Justin suggested, I tried ServicePointManager.DefaultConnectionLimit, but that did not fix the issue. I was not able to reproduce the other problems he described; I'm not sure how I would reproduce them in the first place.
I then ran the same piece of code on a peer machine, where it ran with exactly the response times I expected. The difference between the two machines is the operating system: mine runs Windows Server 2003 and the other machine runs Windows Server 2008.
Since it worked on the other machine, I suspect the cause was one of the problems Justin listed, a server setting on 2003, or something else entirely. I did not spend much time digging into it after that; as this is a test harness, the issue was low priority and we had no time left for it.
Since I have no clue what exactly fixed it, I am not accepting any answer other than this one, because at the very least I know that switching to Server 2008 fixed the issue.
I have this method:
private void sendSms(object url)
{
    var Url = url.ToString();
    webRequest = WebRequest.Create(Url);
    // webRequest.BeginGetResponse(this.RespCallback, webRequest);
    webResponse = webRequest.GetResponse();
    // End the Asynchronous response.
    var stream = new StreamReader(webResponse.GetResponseStream());
    var response = stream.ReadToEnd().ToString();
    if (response.Contains(Config.ValidResponse))
    {
        var queryString = HttpUtility.ParseQueryString(webRequest.RequestUri.Query);
        OnMessageAccepted(this, new MessageAcceptedEventArgs(queryString["SN"], "n/a"));
    }
    else
    {
        OnMessageAccepted(this, new MessageAcceptedEventArgs("", "n/a"));
    }
}
which I call inside a loop like this:
while (true)
{
    sendSms(url);
    Thread.Sleep(400);
}
The problem is that after a few hundred calls (500 or 600), the calls get slower and slower. If I restart the application it starts off fast again, then slows down. I was wondering if there is any buffer or cache I should clear every now and then to keep it fast.
PS: I developed the server, so I'm sure the server isn't slowing it down; I also tried this against different server implementations, both mine and ones developed by others.
Thanks in advance.
You need to dispose of the response and the response stream, e.g. by wrapping them in using blocks.
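A sketch of the method from the question with both wrapped in using blocks (the Config, OnMessageAccepted, and MessageAcceptedEventArgs identifiers are taken from the question as-is). Undisposed responses hold their pooled connections, which is consistent with the gradual slowdown after hundreds of calls:

```csharp
private void sendSms(object url)
{
    var request = WebRequest.Create(url.ToString());
    // Dispose the response and reader so the underlying connection
    // is returned to the pool instead of accumulating.
    using (var webResponse = request.GetResponse())
    using (var reader = new StreamReader(webResponse.GetResponseStream()))
    {
        var response = reader.ReadToEnd();
        if (response.Contains(Config.ValidResponse))
        {
            var queryString = HttpUtility.ParseQueryString(request.RequestUri.Query);
            OnMessageAccepted(this, new MessageAcceptedEventArgs(queryString["SN"], "n/a"));
        }
        else
        {
            OnMessageAccepted(this, new MessageAcceptedEventArgs("", "n/a"));
        }
    }
}
```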
Another question about Web proxy.
Here is my code:
IWebProxy Proxya = System.Net.WebRequest.GetSystemWebProxy();
Proxya.Credentials = CredentialCache.DefaultNetworkCredentials;
HttpWebRequest rqst = (HttpWebRequest)WebRequest.Create(targetServer);
rqst.Proxy = Proxya;
rqst.Timeout = 5000;
try
{
    rqst.GetResponse();
}
catch (WebException wex)
{
    connectErrMsg = wex.Message;
    proxyworks = false;
}
This code hangs for a minute or two the first time it is called. After that, successive calls sometimes work and sometimes don't. It also never hits the catch block.
Now the weird part. If I add a MessageBox.Show(msg) call in the first section of code before the GetResponse() call this all will work every time without hanging. Here is an example:
try
{
    // ========Here is where I make the call and get the response========
    System.Windows.Forms.MessageBox.Show("Getting Response");
    // ========This makes the whole thing work every time========
    rqst.GetResponse();
}
catch (WebException wex)
{
    connectErrMsg = wex.Message;
    proxyworks = false;
}
I'm baffled about why it behaves this way. I don't know if the timeout is not working (it's in milliseconds, so it should time out after 5 seconds, right?) or what is going on. The most confusing thing is that the message box call makes it all work without hanging.
So any help and suggestions on what is happening is appreciated. These are the kind of bugs that drive me absolutely out of my mind.
EDIT and CORRECTION:
OK, so I've been testing this, and the problem occurs when I try to download data from the URI I am getting a response from. I test connectivity using the GetResponse() method on a WebRequest, but download the data with a WebClient. Here is the code for that:
public void LoadUpdateDataFromNet(string url, IWebProxy wProxy)
{
    //Create web client
    System.Net.WebClient webClnt = new System.Net.WebClient();
    //set the proxy settings
    webClnt.Proxy = wProxy;
    webClnt.Credentials = wProxy.Credentials;
    byte[] tempBytes;
    //download the data and put it into a stream for reading
    try
    {
        tempBytes = webClnt.DownloadData(url); // <--HERE IS WHERE IT HANGS
    }
    catch (WebException wex)
    {
        MessageBox.Show("NEW ERROR: " + wex.Message);
        return;
    }
    //Code here that uses the downloaded data
}
The WebRequest and WebClient are both accessing the same URL which is a web path to an XML file and the proxy is the same one created in the method at the top of this post. I am testing to see if the created IWebProxy is valid for the specified path and file and then downloading the file.
The first piece of code I put above and this code using the WebClient are in separate classes and are called at different times, yet using a message box in the first bit of code still makes the whole thing run fine, which confuses me. Not sure what all is happening here or why message boxes and running/debugging in Visual Studio makes the program run OK. Suggestions?
So, I figured out the answer to the problem. The timeout for the web request is still 5 seconds, but if the response is not closed explicitly, it makes subsequent web requests hang. Here is the code now:
IWebProxy Proxya = System.Net.WebRequest.GetSystemWebProxy();
//to get default proxy settings
Proxya.Credentials = CredentialCache.DefaultNetworkCredentials;
Uri targetserver = new Uri(targetAddress);
Uri proxyserver = Proxya.GetProxy(targetserver);
HttpWebRequest rqst = (HttpWebRequest)WebRequest.Create(targetserver);
rqst.Proxy = Proxya;
rqst.Timeout = 5000;
try
{
    //Get response to check for valid proxy and then close it
    WebResponse wResp = rqst.GetResponse();
    //===================================================================
    wResp.Close(); //HERE WAS THE PROBLEM. ADDING THIS CALL MAKES IT WORK
    //===================================================================
}
catch (WebException wex)
{
    connectErrMsg = wex.Message;
    proxyworks = false;
}
Still not sure exactly how calling the message box was making everything work, but it doesn't really matter at this point. The whole thing works like a charm.