Edit: after talking it over with a couple of IT guys, I've realized it's only the poll requests that are having issues. The images are fetched via GET requests, and those go through quickly and as expected whether or not the poll messages are having trouble.
I'm working on a client to interface with an IP camera in C#.
It's all working dandy except that I get really poor HTTP request performance when I'm not running Fiddler (a web traffic inspection proxy).
I'm using an HttpClient to send my requests; this is the code that actually initiates the poll request:
public async Task<bool> SetPoll(int whichpreset)
{
    string action = "set";
    string resource = presetnames[whichpreset];
    string value = presetvalues[whichpreset];
    int requestlen = 24 + action.Length + resource.Length + value.Length;

    var request = new HttpRequestMessage
    {
        RequestUri = new Uri("http://" + ipadd + "/res.php"),
        Method = HttpMethod.Post,
        Content = new FormUrlEncodedContent(new[]
        {
            new KeyValuePair<string, string>("action", action),
            new KeyValuePair<string, string>("resource", resource),
            new KeyValuePair<string, string>("value", value)
        }),
        Version = new System.Version("1.1"),
    };

    HttpResponseMessage mess = await client.SendAsync(request);
    if (mess.IsSuccessStatusCode)
    {
        return true;
    }
    else
    {
        return false;
    }
}
When Fiddler is up, all my http requests go through quickly, and without a hitch (I'm making about 20 post requests upon connecting). Without it, they only go through as expected ~1/5 of the time, and the rest of the time they're never completed, which is a big issue. Additionally, the initial connection request often takes 1+ minutes when not using Fiddler, and consistently only takes a few seconds when I am, so it doesn't seem to be a timing issue of sending requests too soon after connecting.
This leads me to think that the request, as written, is fairly poorly behaved, and perhaps Fiddler's requests behave better. I'm a newbie to HTTP, so I'm not sure exactly why this would be. My questions:
Does Fiddler modify HTTP requests (e.g. different headers, etc.) as they are sent to the server?
Even if it doesn't modify the requests, are Fiddler's requests in some way better behaved than what I'd be getting out of .NET 4.0 in C# in VS2013?
Is there a way to improve the behavior of my requests to emulate whatever Fiddler is doing? Ideally while still working within the stock HTTP namespace, but I'm open to using others if necessary.
I'll happily furnish more code if helpful (though tomorrow).
Inserting
await Task.Delay(50);
between all requests fixed the problem (I haven't yet tested different delays). Because Fiddler smoothed the problem out, I suspect the camera has an issue with requests sent in too quick a succession, and Fiddler sent them at a more tolerable rate. Because it's an async await, there is no noticeable performance impact, other than it taking a little while to get all ~20 (30 now) requests through on startup, which is not an issue for my app.
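For reference, a minimal sketch of the pacing I ended up with (assuming presetnames is the same array used above; the loop and the 50 ms value are illustrative, not the exact production code):

// Pace the startup burst so the camera isn't hit with back-to-back requests.
public async Task SendStartupPresets()
{
    for (int i = 0; i < presetnames.Length; i++)
    {
        await SetPoll(i);      // existing method shown above
        await Task.Delay(50);  // small gap between requests; 50 ms worked for my camera
    }
}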
Fiddler installs itself as a system proxy. It is possible that the Fiddler process has better access to the network than your application's process.
Fiddler might be configured to bypass your normal system proxy (check the gateway tab under options) and perhaps the normal system proxy has issues.
Fiddler might be running as a different user with a different network profile, e.g. could be using a different user cert store or different proxy settings such as exclusion list.
Fiddler might be configured to override your hosts file and your hosts file may contain errors.
Your machine might be timing out trying to reach the servers necessary to check for certificate revocation. Fiddler has CRL checking disabled by default (check the HTTPS tab).
Fiddler has a ton of options and the above are just some guesses.
My recommendation would be to check and/or toggle the above options to see if any of them apply. If you can't get anywhere, you may have to forget Fiddler exists and troubleshoot your network problems independently, e.g. by using NSLOOKUP, PING, TRACERT, and possibly TELNET to isolate the problem.
There is nothing in your code sample that suggests a code flaw that could cause intermittent network failures of the kind you are describing. In fact it is hard to imagine any code flaw that would cause that sort of behavior.
Related
I am using an ONVIF-supported camera and am controlling it using the ONVIF WSDLs (specifically ImagingService, Media10, and PTZ). These work great for controlling most aspects of the camera, but there are a few settings I want to modify that do not seem to be supported by my camera's version of ONVIF.
ONVIF works by sending SOAP requests/packets to the camera. However, when I used Wireshark and observed what happens when I use the software the camera came with, I noticed it works by sending HTTP GET/POST requests containing form-URL-encoded messages, and one of the fields is labelled session, presumably the session ID for the connection. I was able to recreate the message that the camera's software sends, send it, and confirm that my settings change. However, whenever a new session starts (different ID), it no longer works, and my HTTP response contains:
{"error":{"code":287637505,"message":"Invalid session in request data!"},"result":false}
In other words, hard-coding the session ID that I got through Wireshark was the only way I could get it to work, but as soon as the session changes it no longer works. I was wondering if there was a way to easily get the session ID, store it as a variable, and then just place it within my messages. I would hope this would be possible either through the HttpClient, or any of the three ONVIF clients I am using (MediaClient, ImagingClient, or PTZClient).
I have tried searching through many of the methods/fields in each of the clients above, creating HttpWebRequests, looking through the response messages for GET & POST, looking around where the login credentials are kept, and creating a Cookie with the info.
The way I go about setting everything up is rather simple:
private static readonly HttpClient httpClient = new HttpClient();

var msg = $"{{\"method\":\"configManager.setConfig\",\"params\":{{\"name\":\"VideoInExposure\",\"table\":" +
          $"[[{{\"AntiFlicker\":2,\"Compensation\":50,\"DoubleExposure\":0,\"Gain\":50,\"GainMax\":2,\"GainMin\":0," +
          $"\"GlareInhibition\":0,\"Iris\":50,\"IrisAuto\":true,\"IrisMax\":30,\"IrisMin\":0,\"Mode\":4,\"RecoveryTime\":900," +
          $"\"SlowAutoExposure\":50,\"SlowShutter\":false,\"SlowSpeed\":30,\"Speed\":0,\"Value1\":250,\"Value2\":250," +
          $"\"WideDynamicRange\":0,\"WideDynamicRangeMode\":0}}]],\"options\":[]}},\"id\":6123,\"session\":\"4ec9a21aac415eead620ab163141fa03\"}}";

var response = await httpClient.PostAsync("http://192.168.100.233/RPC2", new StringContent(msg, Encoding.UTF8, "application/x-www-form-urlencoded"));
Where the portion labeled session at the end of the message is the string that I need to obtain.
I was hoping this property would be stored somewhere accessible, either in the client, or within a response if I just do a simple GET command, but I have not yet found this to be the case. Any help is greatly appreciated!
Edit: I have also tried getting this value from the SOAP commands that ONVIF already uses, with no luck so far. But if there is a way to do it that way, I am very open to it.
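For what it's worth, here is the direction I'm poking at: keep the session as a variable and scrape it out of whatever earlier response the camera returns it in, rather than hard-coding it. This is only a sketch; which request actually yields the session (if any does outside the vendor's own software) is exactly the open question, and the trimmed-down message below is an assumption, not a documented API:

using System.Text;
using System.Text.RegularExpressions;

// Pull a "session":"..." value out of a response body, if one is present.
static string TryExtractSession(string responseBody)
{
    var m = Regex.Match(responseBody, "\"session\"\\s*:\\s*\"([^\"]+)\"");
    return m.Success ? m.Groups[1].Value : null;
}

// Build the RPC2 message with the session supplied as a variable
// instead of the value captured in Wireshark.
static StringContent BuildSetConfig(string session)
{
    var msg = $"{{\"method\":\"configManager.setConfig\",\"params\":{{\"name\":\"VideoInExposure\"}},\"id\":6123,\"session\":\"{session}\"}}";
    return new StringContent(msg, Encoding.UTF8, "application/x-www-form-urlencoded");
}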
I'm working on developing a mobile app that is centered around uploading multiple photos to a web api. I'm using Xamarin.Forms and System.Net.Http.HttpClient, and Clumsy to simulate poor network conditions (lag, dropped packets, out-of-order packets). The app was originally written with Titanium, and worked fine for most users, but some users on poor mobile networks were getting frequent errors. Going forward we are porting to Xamarin and trying to accommodate users with poor connectivity.
using (var httpClient = CreateClient())
{
    httpClient.Timeout = TimeSpan.FromMinutes(5);

    using (var formData = new MultipartFormDataContent())
    {
        // add required fields via formData.Add() ...
        var httpContent = new ByteArrayContent(imageData);
        formData.Add(httpContent, "file", Guid.NewGuid() + ".jpg");

        try
        {
            var response = await httpClient.PostAsync("fileupload", formData).ConfigureAwait(false);
            if (response.IsSuccessStatusCode)
            {
                responseObject = await ResponseMessageToResponseModel(response).ConfigureAwait(false);
            }
        }
        catch (HttpRequestException ex)
        {
            Debug.WriteLine("HttpRequestException");
        }
        catch (TaskCanceledException ex)
        {
            Debug.WriteLine("TaskCanceledException");
        }
    }
}
What I'm finding is that everything works as expected under normal conditions; however, when enabling Clumsy with "lag, drop, out-of-order" and attempting the upload, PostAsync() never completes and eventually times out with TaskCanceledException. The odd thing is that the file ends up on the server, so the POST data apparently made it through okay.
I'm guessing that packets dropped in the response from the server means the HttpClient never receives a proper response and continues to wait for one until it times out.
To get to the point, I'm wondering if anyone has any ideas on how to make this process as bullet-proof as possible. Just catching the timeout and trying again doesn't work very well if the file made it through the first time. Any thoughts?
Also, any info on how HttpClient handles dropped/out-of-order packets so I can better understand what's happening would be great as well.
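To make the failure mode concrete, one pattern I'm considering is tagging each upload with a client-generated ID so a retry after a lost response can't create a duplicate. This sketch assumes the web API can be extended to deduplicate on that ID; the "uploadId" field name is invented here, and httpClient/imageData are the same objects as in the code above:

// Retry the upload a few times, reusing the same client-generated ID so the
// server can ignore duplicates if the first POST landed but the response was lost.
var uploadId = Guid.NewGuid().ToString();

for (var attempt = 1; attempt <= 3; attempt++)
{
    try
    {
        using (var formData = new MultipartFormDataContent())
        {
            formData.Add(new StringContent(uploadId), "uploadId"); // assumed dedupe key
            formData.Add(new ByteArrayContent(imageData), "file", uploadId + ".jpg");

            var response = await httpClient.PostAsync("fileupload", formData).ConfigureAwait(false);
            if (response.IsSuccessStatusCode)
            {
                break; // upload confirmed
            }
        }
    }
    catch (TaskCanceledException)
    {
        // Timed out waiting for the response; the file may still have arrived,
        // which is why the retry reuses the same uploadId.
    }
}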
One thing with HttpClient I banged my head against a while ago was its special (read: uncommon) handling of POST requests.
When sending a POST request it first sends the headers (including Content-Length and the special Expect: 100-continue header) to the server, but without the body, and waits for the server to respond with status code 100 if the request is acceptable. Only after that does it start sending the body.
Additional info here:
MSDN Page for ServicePointManager.Expect100Continue
MSDN blog post with some details
In my case the problem was that this part of the protocol wasn't handled particularly well by the backend service (Play framework) I was talking to when the request size was too large for it to handle. It was not returning any error, and the request simply timed out. So disabling it with
ServicePointManager.Expect100Continue = false;
well ahead of sending any request to this host solved the problem for me. At least now it was returning something.
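If you'd rather not flip the global ServicePointManager switch, the same handshake can be turned off per request on HttpRequestMessage; a minimal sketch (the URL is a placeholder, imageData/httpClient are the objects from your code above):

// Disable the Expect: 100-continue handshake for just this request.
var request = new HttpRequestMessage(HttpMethod.Post, "http://example.com/fileupload")
{
    Content = new ByteArrayContent(imageData)
};
request.Headers.ExpectContinue = false;

var response = await httpClient.SendAsync(request).ConfigureAwait(false);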
If that doesn't help, then the best thing I can recommend is looking with Wireshark or something similar at what is going on on the wire, provided it plays nice with this Clumsy tool you're using. (Thanks for the link, by the way; I was looking for something like this myself.)
I am writing a web application in C#. One piece of functionality is that the server will send out a push notification offering a client the opportunity to do a round of work. The client can accept or refuse this work.
However, if the client takes too long to respond, the server will see this as an implicit refusal and offer the round of work to someone else.
Here is an extract of the controller endpoint on the server, where a client can post its acceptance of the current round:
public HttpResponseMessage PostAcceptRound(PersonData accepter)
{
    Round currentRound = repo.GetCurrentRound();
    if (currentRound.offeredTo.id == accepter.id)
    {
        repo.RoundAccepted(currentRound.id);
        return Request.CreateResponse<String>(HttpStatusCode.OK, "Round has been accepted");
    }
    else
    {
        // return appropriate response
    }
}
My question is: what is the appropriate response for the client taking too long to accept?
My initial reaction was that I should send a "BadRequest" error response. However, it is not as if a person responding late is a poorly formed request or something unexpected. Indeed, it seems as if accepting too late will be a situation that happens often within the use of this application.
408 'Request Timeout' seems to me to be the most appropriate.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
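In the controller above, the else branch could then look something like this sketch (the message text is illustrative):

else
{
    // 408: the client took too long to accept, so the offer is no longer valid.
    return Request.CreateResponse<String>(HttpStatusCode.RequestTimeout, "Round offer has expired");
}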
Issue:
Consider the following working code.
System.Net.WebProxy proxy = new System.Net.WebProxy(proxyServer[i]);
System.Net.HttpWebRequest objRequest = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(https_url);
objRequest.Method = "GET";
objRequest.Proxy = proxy;
First, notice that proxyServer is an array so each request may use a different proxy.
If I comment out the last line thereby removing the use of any proxies, I can monitor requests in Fiddler just fine, but once I reinstate the line and start using them, Fiddler stops logging outbound requests from my app.
Question:
Do I need to configure something in Fiddler to see the requests or is there a change in .Net I can make?
Notes:
.Net 4.0
requests are sometimes HTTPS, but I don't think this is directly relevant to the issue
all requests are outbound (not localhost/127.0.0.1)
Fiddler is a proxy itself. By assigning a different proxy to your request, you're essentially taking Fiddler out of the equation.
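For context, when Fiddler is capturing it registers itself as the system proxy on 127.0.0.1:8888 by default, so a request is only visible to it if it is routed there. A sketch of what that looks like (8888 is Fiddler's default listening port; adjust if you've changed it):

// Route this request through Fiddler explicitly, purely to illustrate why
// assigning your own WebProxy hides the traffic from Fiddler.
System.Net.HttpWebRequest objRequest = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(https_url);
objRequest.Method = "GET";
objRequest.Proxy = new System.Net.WebProxy("127.0.0.1", 8888);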
If you're looking to capture traffic while also using your own proxy, you can't do it with another proxy (by definition that makes no sense); you want a network analyzer, such as Wireshark. It captures the traffic instead of having the traffic routed through it (as a proxy does), which lets you monitor the traffic while still routing your requests through your custom proxy.
My app uses the .NET 4.5 HttpClient and sends a Keep-Alive header like this:
Client.DefaultRequestHeaders.Add("Keep-Alive", "true");
So far the HttpClient has just worked and the speed was okay, but I recently discovered in a test program (it sends as many requests as possible over multiple threads to an HTTPS server and outputs the requests-per-second rate to measure performance) that it is around 3 times faster when Fiddler is running, even without the reuse-connection option (no difference). I researched the topic, but there were only hints pointing to the Keep-Alive header and the reuse-connection option. So my questions are: why does Fiddler speed up the app, and what do I have to change in my code to make the requests faster?
Any help will be greatly appreciated.
(Please add a comment if more information is needed.)
OK, I found the cause after looking at the similar WebClient question below: if you have problems like me, just set ServicePointManager.DefaultConnectionLimit = 300; (or something similar) before making any requests in your code.
WebClient is very slow
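A minimal sketch of where that setting goes, assuming a single shared client (300 is just the value that worked for me; treat it as tunable):

// Raise the per-host connection limit before any requests are made; the
// .NET Framework default of 2 concurrent connections per host throttles
// multi-threaded request loops like the test program described above.
System.Net.ServicePointManager.DefaultConnectionLimit = 300;

var client = new System.Net.Http.HttpClient();
client.DefaultRequestHeaders.Add("Keep-Alive", "true");
// ... fire requests from multiple threads as in the test program ...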