C# .Net Standard HttpClient Error while copying content to a stream - c#

I am working on a .NET Standard class library, creating a wrapper for the new Destiny 2 API. I have one method working, which searches for a user to get their information. However, when I make a request to a different endpoint I get this error:
System.AggregateException: One or more errors occurred. --->
System.Net.Http.HttpRequestException: Error while copying content to a
stream. ---> System.IO.IOException: The read operation failed, see
inner exception. ---> System.Net.Http.WinHttpException: The operation
has been canceled
I can send this request just fine using Postman or any other API testing tool, so something must be wrong with my code. Something worth noting: when using other API tools I notice that I get redirected a few times. In Fiddler I can see that none of the requests actually get JSON back; they all look like redirects to another page. Postman shows 3 redirects, Fiddler shows 2, and then my code fails.
My code is pretty small so I cannot think of much that could be breaking it:
public string GetProfile(BungieMembershipType membershipType, string destinyMembershipId)
{
    var properUrl = String.Format(GetProfileUrl, (int)membershipType, destinyMembershipId);
    var rawData = RootRequest.Web.GetStringAsync(properUrl).Result;
    return rawData;
}
The only thing that seems odd to me is that I am testing my code inside unit tests, and I cannot evaluate RootRequest while debugging. RootRequest is a static class with a static HttpClient on it that is used for making all requests, to keep authentication against the API simple.
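For context, a minimal sketch of what such a wrapper might look like (the class shape and the X-API-Key header are assumptions based on the description and the Bungie API docs, not code from the question):

public static class RootRequest
{
    // Single shared HttpClient; the Destiny 2 API expects an X-API-Key header on every call.
    public static readonly HttpClient Web = CreateClient();

    private static HttpClient CreateClient()
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Add("X-API-Key", "your-api-key-here");
        return client;
    }
}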

I would have assumed that something was actually wrong internally on the Bungie API servers, but several things in my code had to change. Firstly, my BaseUri was missing the www. Secondly, GetProfileUrl was missing a / before the ?queryString was appended.
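For anyone hitting the same redirects, a rough sketch of the corrected values (the endpoint path shown is only illustrative, taken from the Destiny 2 GetProfile documentation rather than from the original code):

// BaseUri needs the www.; without it bungie.net redirects and the body is never JSON.
private const string BaseUri = "https://www.bungie.net/Platform";

// Note the trailing / before the query string; without it the API redirects as well.
private const string GetProfileUrl = BaseUri + "/Destiny2/{0}/Profile/{1}/?components=100";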

Related

Is there a way to change the HTML Error Code, if the permitted URL is too long?

We have the following problem:
I am currently developing a web server implementing a specific API. The association behind that API provided specific test cases that I'm using to test my implementation.
One of the test cases is:
5.3.2.12 Robustness, large resource ID
This test confirms correct error handling when sending a HTTP request with a very long location ID as URL parameter.
The URL it calls looks something like this:
https://localhost:443/api/v2/functions/be13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005ebe13789-1f1e-47d0-8f8a-000000000005
Basically the test checks whether my server responds with the correct error code if the URL is too long. (At the time of writing it is testing for error code 405, but I have already asked them whether it shouldn't be 414.)
I'm developing the server in ASP.NET 6 and it always returns 400 Bad Request in this test case.
I can't find a place to change the handling of this behaviour, and I am not even sure whether I can, or whether IIS blocks the request before it even reaches my server. I activated logging in IIS, but the request does not show up in the log file in inetpub/logs/LogFiles.
My question is whether it is possible to tell IIS to return a different error code in this case, or whether it is possible to handle the error in my application at all.
What I tried:
Activated IIS logging to see whether the request is even passed to my site (it is not).
Added filters to my controller to see whether I can catch an exception.
Checked whether the development error pages are called.
Breakpoints in existing middleware are not reached.
EDIT:
I am now pretty sure that the request never reaches my application.
The error can be reproduced with the default site that IIS generates on Windows: copying the whole path above into a browser with the host http://localhost also just produces error 400.
EDIT 2:
As @YurongDai pointed out, I tried activating failed request tracing for my IIS site. I used the default path \logs\FailedReqLogFiles.
The folder was created, but no file is written when I open the URL above in my browser.
IIS error 400 occurs when the server is unable to process a request sent to it. The most common cause of a 400 Bad Request error is an invalid URL, but it can happen for other reasons as well. To resolve it, first make sure that you have entered the URL correctly; typos or disallowed characters in the URL are the most common causes of Bad Request errors. If the error persists after verifying the URL, clear your browser's cache, DNS cache, and cookies and try again.
Clear your browser's cookies.
Clear your browser's cache.
Clear your DNS cache (execute the following command in a command prompt window: ipconfig /flushdns).
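If the long URL is being rejected by IIS request filtering before it reaches ASP.NET, the limits (and therefore whether your application ever gets a chance to choose the status code) can be adjusted in web.config; a sketch, with example values only:

<!-- web.config: raise the request-filtering limits so very long URLs reach the application -->
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxUrl="8192" maxQueryString="8192" />
    </requestFiltering>
  </security>
</system.webServer>

If the request still never shows up in the IIS or failed-request logs, the rejection may be happening even earlier, in HTTP.sys, whose URL length limits are machine-wide registry settings (MaxFieldLength/MaxRequestBytes) and produce a bare 400 that the application cannot override.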

New Server Security Causing Issues To API Response

One of my old projects/apps had been working fine for years, but very recently the client reported that the app no longer works because of an API response issue.
It's just a GET request to an API with some parameters.
Until the issue occurred, it returned the following response:
,,3,1669179307,0,
But recently it returns the following response (note that nothing in the source PHP/code files has changed since the project started):
<html><title>You are being redirected...</title>
<noscript>Javascript is required. Please enable javascript before you are allowed to see this page.</noscript>
<script>var s={},u,c,U,r,i,l=0,a,e=eval,w=String.fromCharCode,sucuri_cloudproxy_js='',S='bT0nP2RUNCcuc3Vic3RyKDMsIDEpICsgJycgKyAKJz9iVGYnLnN1YnN0cigzLCAxKSArICcnICsgCidIcExjJy5zdWJzdHIoMywgMSkgK1N0cmluZy5mcm9tQ2hhckNvZGUoNTYpICsgJ3FAYycuY2hhckF0KDIpKyAnJyArIAonNycgKyAgICcnICsgClN0cmluZy5mcm9tQ2hhckNvZGUoMHg2MykgKyAgJycgKycnKyIyc3VjdXIiLmNoYXJBdCgwKSsiYyIgKyAiNHNlYyIuc3Vic3RyKDAsMSkgKyAiZW0iLmNoYXJBdCgwKSArICAnJyArIAoiM2MiLmNoYXJBdCgwKSArICIiICtTdHJpbmcuZnJvbUNoYXJDb2RlKDk3KSArICJlIi5zbGljZSgwLDEpICsgICcnICsnZCcgKyAgJ0RiJy5zbGljZSgxLDIpKyAnJyArJycrJ3hJMScuY2hhckF0KDIpK1N0cmluZy5mcm9tQ2hhckNvZGUoMHgzMSkgKyAncTAwJy5jaGFyQXQoMikrU3RyaW5nLmZyb21DaGFyQ29kZSgweDYzKSArICIiICsnSHVIZScuc3Vic3RyKDMsIDEpICsiN3N1Ii5zbGljZSgwLDEpICsgIjhzdSIuc2xpY2UoMCwxKSArICdjJyArICAiZHN1Y3VyIi5jaGFyQXQoMCkrJ2EnICsgICIiICsiY3N1Y3VyIi5jaGFyQXQoMCkrImRzZWMiLnN1YnN0cigwLDEpICsgU3RyaW5nLmZyb21DaGFyQ29kZSg0OSkgKyAgJycgKyAKU3RyaW5nLmZyb21DaGFyQ29kZSgweDMzKSArICAnJyArJycrJ2QnICsgICAnJyArIAonMScgKyAgJyc7ZG9jdW1lbnQuY29va2llPSdzdXMnLmNoYXJBdCgyKSsndXN1YycuY2hhckF0KDApKyAnYycrJ3UnLmNoYXJBdCgwKSsncnN1Y3VyaScuY2hhckF0KDApICsgJ2knKycnKydzdWN1cmlfJy5jaGFyQXQoNikrJ2MnKycnKydsc3VjdXJpJy5jaGFyQXQoMCkgKyAnb3N1Jy5jaGFyQXQoMCkgKyd1JysnZHMnLmNoYXJBdCgwKSsnc3AnLmNoYXJBdCgxKSsncnN1Y3UnLmNoYXJBdCgwKSAgKydvJysneHN1Y3VyJy5jaGFyQXQoMCkrICd5Jysnc3VjdXJfJy5jaGFyQXQoNSkgKyAnc3V1Jy5jaGFyQXQoMikrJ3UnKydpJysnJysnZHN1Y3VyJy5jaGFyQXQoMCkrICdfc3UnLmNoYXJBdCgwKSArJzQnKycnKydzdWN1cmMnLmNoYXJBdCg1KSArICc2c3VjJy5jaGFyQXQoMCkrICcwc3VjdXInLmNoYXJBdCgwKSsgJ3N1Y3VyaTUnLmNoYXJBdCg2KSsnc3U0Jy5jaGFyQXQoMikrJ3N1Y3VyNCcuY2hhckF0KDUpICsgJ2YnKycyc3VjdXJpJy5jaGFyQXQoMCkgKyAiPSIgKyBtICsgJztwYXRoPS87bWF4LWFnZT04NjQwMCc7IGxvY2F0aW9uLnJlbG9hZCgpOw==';L=S.length;U=0;r='';var A='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';for(u=0;u<64;u++){s[A.charAt(u)]=u;}for(i=0;i<L;i++){c=s[S.charAt(i)];U=(U<<6)+c;l+=6;while(l>=8){((a=(U>>>(l-=8))&0xff)||(i<(L-2)))&&(r+=w(a));}}e(r);</script></html>
Here is the curl screenshot:
And here is the Postman screenshot:
When I check the URL in a browser it shows the expected result, though in the devtools network tab it looks like the page is loaded twice: the first request returns the error (HTML/JS) response (marked in red), and the second one returns the expected response (marked in green). So it looks like a call made directly by curl/Postman/C# fails, but because the browser can follow the redirect it passes.
here is the browser screenshot:
I am sorry, I added several screenshots to give a better idea of what is happening.
And here is the URL in question:
https://simpleclienttracking.com/membershipmanager/remotelogvisit.php?locID=1&orgID=1&deptID=1&barcode=8346420
Now my question is: how can I use the API code/file to get the direct response as I was getting earlier? Do I need to pass any header? Update/modify the server's htaccess file, or what?
To dig into the error further, I tried another URL from another hosting provider. In that case I am sending a POST request to a URL, and that server responds with something slightly different, but the core looks the same: a redirect!
here is the response from new/another server:
<script>document.cookie = "humans_21909=1"; document.location.reload(true)</script>
So it looks like the hosting providers have applied some kind of security against direct URL access?
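For the second server at least, the challenge only sets a cookie and reloads, so one hedged workaround from C# would be to send that cookie yourself (the cookie name comes from the response above; the host and path below are placeholders, and whether the firewall also checks other headers is an assumption):

// Sketch: send the cookie the challenge script would have set, plus a browser-like User-Agent.
var baseUri = new Uri("https://example-host.com/");   // placeholder host
var cookies = new CookieContainer();
cookies.Add(baseUri, new Cookie("humans_21909", "1"));

var handler = new HttpClientHandler { CookieContainer = cookies };
var client = new HttpClient(handler);
client.DefaultRequestHeaders.UserAgent.ParseAdd("Mozilla/5.0 (Windows NT 10.0; Win64; x64)");

var result = await client.GetStringAsync(new Uri(baseUri, "remotelog.php?foo=bar"));   // placeholder path

The Sucuri-style challenge on the first host computes its cookie value inside the obfuscated JavaScript, so a plain HTTP client cannot satisfy it; there the usual route is to ask the hosting provider to whitelist the API path or the calling server's IP.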
thanks in advance for any upcoming help
best regards

.NET Core 2.1 new HttpClient based on .NET sockets and Span<T> seems to have a problem

I have a strange issue I'm trying to triage having to do with the new HttpClient on .NET Core 2.1. From this article here (https://blogs.msdn.microsoft.com/dotnet/2018/04/11/announcing-net-core-2-1-preview-2/) I know that the HttpClient has been completely re-written to use a different low level library for handling HTTP requests. I'm wondering if anyone has seen any issues with the new implementation.
What I'm seeing is a strange case where my application (.NET Core 2.1), which sends a POST request to an API periodically (every 10 seconds), will a few times every 15 minutes throw an exception with the error: An error occurred while sending the request.; The server returned an invalid or unrecognized response.
No other details are available, it's just an exception when I make a call like this:
using (var res = await _httpClient.PostAsync(uriBuilder.Uri, new StringContent(serializedRequestBody, Encoding.UTF8, "application/json")))
{
    // Do something here
}
The exception caught is a System.Net.Http.HttpRequestException and it has some inner exception with the above error message.
So as I mentioned, this does NOT happen all the time; it happens seemingly at random, or at least I cannot discern any particular pattern. All I can say is these POST requests are made once every 10 seconds, 24/7, and anywhere between 5% and 10% of them fail with the above exception.
So I used tcpdump and piped it into Wireshark to examine the requests and see what's actually happening when they fail, and what I see is the following:
On a good POST I see: my app sends the request to server, server sends response back, my app sends ACK to server and server responds with FIN,ACK. Done. Good Stuff.
On POST which gets the above exception I see the following: my app sends the request to server, and almost immediately after (like a few milliseconds after) my application sends FIN, ACK to server.
This seems consistent with what I see in my application logs, which show that the request duration is 0 before the exception is thrown.
So what it looks like to me is, my application sends the request and then immediately after closes the connection for some reason. However, I don't understand why this happens. I tried comparing the raw HTTP requests (good POST vs bad POST) to see any differences and I can not see any difference.
One last thing to mention, is that I ONLY see this in applications running on .NET Core 2.1. When I run my application on .NET 2.0 I do not see this problem. Also when I use the same library (where the HTTP call is being made) in the .NET 4.5.1 application (I use multi-targeting to compile the library targeting .net standard and net451) I also do NOT see this problem. So it seems to affect only .NET Core 2.1
Any ideas of where I can go from here? Is there something else I should look for ? How would someone go about trying to triage this type of issue ?
[EDIT] I added a screenshot of the Wireshark output, which shows that the server never responds to the last POST request before the client sends FIN,ACK.
[EDIT]
@Svek pointed out something in the comments about the sequence of ACKs. I think there may be something here, because (in the screenshot) after the very last POST there is a FIN,ACK showing Ack=7187, and looking back I see the previous FIN,ACK had Seq=7186. Now, I'm by far not an expert in TCP or networking, so I may be saying something completely dumb, but does that mean that the last FIN,ACK (which goes from my host to the server) is essentially my host FIN,ACK'ing the previous FIN,ACK (from the server to my host) and thereby closing the connection?
So since the next POST is made to the same host:port, using the same connection, and yet the connection has been closed (via that last FIN,ACK), is that why I never get a response back?
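One way to narrow this down: .NET Core 2.1 ships an AppContext switch that reverts HttpClient to the pre-2.1 handler, so the same code can be A/B tested with and without SocketsHttpHandler; alternatively the pooled connections can be recycled so a connection the server has quietly closed is not reused. A diagnostic sketch, not a definitive fix:

// Option 1: opt out of SocketsHttpHandler entirely (must run before any HttpClient is created).
AppContext.SetSwitch("System.Net.Http.UseSocketsHttpHandler", false);

// Option 2: keep SocketsHttpHandler but limit how long pooled connections are kept,
// so the client does not try to reuse a connection the server has already closed.
var handler = new SocketsHttpHandler
{
    PooledConnectionLifetime = TimeSpan.FromMinutes(2),
    PooledConnectionIdleTimeout = TimeSpan.FromSeconds(30)
};
var client = new HttpClient(handler);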

Accessing OneDrive from Desktop App

I'm trying to get to grips with OneDrive, using this tutorial:
https://msdn.microsoft.com/en-us/library/hh826529.aspx
When I run the code, it gets as far as the makeAccessTokenRequest function, sending the following request URL:
"https: //login.live.com/oauth20_token.srf?client_id=[myclientID] &client_secret=[myclientsecret]&redirect_uri=https:// login.live.com/oauth20_desktop.srf&grant_type=authorization_code&code=[authcode]"
(please ignore the spaces after "https:", I had to add them here to allow the question)
[myclientid], [myclientsecret], and [authcode] all appear to be populated correctly. It seems to get a response, as it runs the accessToken_DownloadStringCompleted function, but it throws a TargetInvocationException; the inner message of the error is "The remote server returned an error: (400) Bad Request.".
Could anyone throw any light on this? I'm completely new to this, so apologies if my question makes no sense or is irritatingly vague.
Requests to the oauth20_token.srf end point need to be a POST with the parameters in the body of the post, instead of the query string. Since you didn't mention what code you're using to build the HTTP request it's hard to provide an example, but take a look at RedeemAuthorizationCodeAsync in my sample OAuth 2 project for an idea.
The outgoing request should look like this:
POST https://login.live.com/oauth20_token.srf
Content-Type: application/x-www-form-urlencoded
client_id={client_id}&redirect_uri={redirect_uri}&client_secret={client_secret}&code={code}&grant_type=authorization_code
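If you are using HttpClient, a minimal sketch of that POST might look like this (the variables holding the client id, secret, redirect URI and code are assumed to be defined elsewhere):

// Redeem the authorization code with a form-encoded POST body (not a query string).
var values = new Dictionary<string, string>
{
    { "client_id", clientId },
    { "redirect_uri", redirectUri },
    { "client_secret", clientSecret },
    { "code", authorizationCode },
    { "grant_type", "authorization_code" }
};

var client = new HttpClient();
var response = await client.PostAsync(
    "https://login.live.com/oauth20_token.srf",
    new FormUrlEncodedContent(values));   // sets Content-Type: application/x-www-form-urlencoded
var json = await response.Content.ReadAsStringAsync();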
You may also find this tutorial easier to follow than the one you linked with: https://dev.onedrive.com/auth/msa_oauth.htm.
If you are doing something with OneDrive (you tagged the post OneDrive) then you may want to consider using the OneDrive SDK instead. It includes authentication for several types of .NET projects so you don't need to figure out how to do auth yourself.

Is My site down, not working or has an error?

I want to create a small Windows application that will automatically go to my site at regular intervals and check whether it is running fine. If it finds the site down, not working, or returning an error (examples: 404, network error, connection to the DB failed), it will show a message on my screen.
How can I detect that there is an error there programmatically, using any .NET language?
It's pretty easy to do with a WebClient. It would look something like this:
WebClient client = new WebClient();
try
{
    string response = client.DownloadString("http://www.example.com/tester.cgi");
    // We at least got the file back from the server.
    // You could optionally look at the contents of the file
    // for additional error indicators.
    if (response.Contains("ERROR: Something"))
    {
        // Handle
    }
}
catch (WebException ex)
{
    // We couldn't get the file.
    // ... handle, depending on the ex
    //
    // For example, by looking at ex.Status:
    switch (ex.Status)
    {
        case WebExceptionStatus.NameResolutionFailure:
            // ...
            break;
        // ...
    }
}
You could hook that up to a Timer's Tick event or something to periodically make the check.
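For the periodic part, a small sketch using System.Timers.Timer (the interval, URL and CheckSite method are placeholders; CheckSite would contain the WebClient code above):

var timer = new System.Timers.Timer(60000);   // check every 60 seconds
timer.Elapsed += (sender, e) => CheckSite("http://www.example.com/tester.cgi");
timer.AutoReset = true;
timer.Start();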
Why bother? You can get a much better solution for cheap from a provider like RedAlert
The nice thing about this is:
1) It tests your site from outside your firewall, so it can detect a wider variety of problems.
2) It is an impartial 3rd party so you can prove uptime if you need to for an SLA.
3) For a small premium you can have the thing try and diagnose the problems.
4) It can page or e-mail you when there is a problem.
5) You don't need to commission a new server.
Geez, I sound like an ad for the guys, but I promise I don't work for them or get a kickback. I have just been happy with the service for our servers.
BTW: I checked pricing and it is about $20 per site/month. So you could probably pay for a year of the service in less time than it will take to build it yourself.
Wanting to perform the same functionality I first looked into third party solutions. One particular service that is free and has been fairly accurate is MonitorUs.
If, however, you want to build your own, then I would have one recommendation: consider using a HEAD request instead of a GET request:
The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response. The metainformation contained in the HTTP headers in response to a HEAD request SHOULD be identical to the information sent in response to a GET request. This method can be used for obtaining metainformation about the entity implied by the request without transferring the entity-body itself. This method is often used for testing hypertext links for validity, accessibility, and recent modification. (w3.org)
Here's a link to Peter Bromberg's article that explains how to perform a Head request in C#.
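A minimal sketch of such a HEAD check with HttpWebRequest (error statuses surface as a WebException):

// Issue a HEAD request so only the headers come back, not the body.
var request = (HttpWebRequest)WebRequest.Create("http://www.example.com/");
request.Method = "HEAD";
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        Console.WriteLine("Site is up: " + response.StatusCode);
    }
}
catch (WebException ex)
{
    // 4xx/5xx responses and network failures land here; ex.Response carries the status code if one was returned.
    var status = (ex.Response as HttpWebResponse)?.StatusCode;
    Console.WriteLine("Site check failed: " + (status?.ToString() ?? ex.Status.ToString()));
}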
Use the System.Net.WebClient object. It's easier to use than HttpWebRequest. It has a "DownloadString" method that will download the contents of a URL into a string. That method may also throw a WebException error if the server returns a 500. For other errors you can parse the string and look for key words.
Use HttpWebRequest, and wrap it in a try catch for WebException. The error code in the exception object will give you the code. 404, etc. If it is 500, you could print the message.
If you do this, create a special page that exercises any special subsystems, like the data base, file IO, etc, and serves up the results in plain text, not html. This will allow you to parse the returned data easier, and will also catch things like DB or IO problems that may not give you a 404 or 500 HTTP error.
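A sketch of such a plain-text health page, assuming a classic ASP.NET generic handler and a connection string named "Main" (both are placeholders):

public class HealthCheck : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        try
        {
            // Exercise the database; add file IO or other subsystem checks the same way.
            using (var conn = new SqlConnection(ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
            {
                conn.Open();
            }
            context.Response.Write("OK");
        }
        catch (Exception ex)
        {
            context.Response.Write("ERROR: " + ex.Message);
        }
    }

    public bool IsReusable { get { return true; } }
}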
Try Adventnet's Application Manager (http://www.manageengine.com/products/applications_manager/), it is free for 5 monitors, and provides excellent monitoring capabilities
You could configure the actions that can be done in case of a failure like send email etc.
If you'd prefer to get email/SMS when your sites are down, try the Are My Sites Up web-based solution.
