NTLM over HTTP: Any C# client implementation? - c#

I need to programmatically download a file from a SharePoint server.
When I download the file with Firefox it looks like a single request, but HttpFox shows that the HTTPS conversation is actually 4 requests:
REQ1: GET https://mycorp.raxsp.com/_windows/default.aspx?ReturnUrl=/personal/mycorp_user1/_vti_bin/cmis/rest?getRepositories
RESP1: 401 Unauthorized, WWW-Authenticate NTLM
REQ2: Authorization NTLM TlRMTVNTUAABAAAAB4IIAAAAAAAAAAAAAAAAAAAAAAA=
RESP2: 401, WWW-Authenticate NTLM TlRMTVNTUAACAAAACgAKADgAAAAFgokC+[...]
REQ3: Authorization NTLM TlRMTVNTUAADAAAAGAAYAIAAAAAYA[...]
RESP3: 302 Found, Set-Cookie FedAuth=77u/PD94bW[...], Location /personal/mycorp_user1/_vti_bin/cmis/rest?getRepositories
REQ4: GET /personal/mycorp_user1/_vti_bin/cmis/rest?getRepositories
RESP4: 200 OK, <download begins>
I tried downloading the file with a simple HttpWebRequest with a user/password, but as expected I just get a 401 error. I am considering implementing all 4 requests myself, computing the challenges with the NTLM over HTTP authentication algorithm (spec), but that sounds very error-prone...
Is there a client-side library or a code snippet that does NTLM over HTTP authentication?
It is for an open-source project, so it must be open source, and preferably use HttpWebRequest.
No Kerberos/SSO/domains involved.

We download files from SharePoint all the time with this code, using System.Net.WebClient:

public static byte[] downloadSharepointFile(string url)
{
    using (var client = new WebClient { Credentials = new NetworkCredential("username", "password", "domain") })
    {
        client.Headers.Add("Accept: application/json");
        return client.DownloadData(url);
    }
}
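
Since the question prefers HttpWebRequest, here is a minimal sketch of the same idea with HttpWebRequest and a CredentialCache entry bound to the NTLM scheme. The URL and credentials are placeholders, and the cookie container is only there because the captured trace shows a FedAuth cookie being set before the 302 redirect; this is an illustration, not a tested implementation.

using System;
using System.IO;
using System.Net;

public static class NtlmDownload
{
    // Sketch only: registering the credentials for the "NTLM" scheme lets
    // HttpWebRequest answer the 401 challenges itself.
    public static byte[] Download(string url, string user, string password, string domain)
    {
        var credentials = new CredentialCache
        {
            { new Uri(url), "NTLM", new NetworkCredential(user, password, domain) }
        };

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Credentials = credentials;               // framework performs the type 1/2/3 handshake
        request.CookieContainer = new CookieContainer(); // keeps the FedAuth cookie across the 302
        request.Accept = "application/json";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var stream = response.GetResponseStream())
        using (var buffer = new MemoryStream())
        {
            stream.CopyTo(buffer);
            return buffer.ToArray();
        }
    }
}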

Related

C# HttpClient authorization header removed after send to server

I want to send a request to an external API through HttpClient from my own API in C#.
I am using .NET 5 and basic authentication.
Here is my code:
var client = new HttpClient
{
    BaseAddress = new Uri(baseUrl)
};
HttpRequestMessage requestMessage = new HttpRequestMessage(HttpMethod.Put, "apiUrl");
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

var param = JsonConvert.SerializeObject(new
{
    param1 = "",
    param2 = ""
});
requestMessage.Content = new StringContent(param, Encoding.UTF8, "application/json");
requestMessage.Headers.Authorization = new AuthenticationHeaderValue("Basic",
    Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{pass}")));

HttpResponseMessage response = await client.SendAsync(requestMessage);
Usually I send HTTP requests like this, but now I have a problem.
After the line HttpResponseMessage response = await client.SendAsync(requestMessage); the Authorization header is removed from my request and I get an Unauthorized HTTP error.
I know that when a redirect occurs, the Authorization header is removed for security reasons, but I'm not sure whether the external API actually redirects.
I added an HttpClientHandler with AllowAutoRedirect = false to my HttpClient:
var handler = new HttpClientHandler()
{
    AllowAutoRedirect = false,
};
var client = new HttpClient(handler)
{
    BaseAddress = new Uri(baseUrl)
};
Now I get a redirect error, 301 (Moved Permanently).
I decided to test the code in Postman. By default, when I call the API in Postman, I get an HTTP 405 Method Not Allowed error with a detail like this:
{
    "detail": "Method \"GET\" not allowed."
}
The external API method is PUT, but here I get a GET error.
I tried many options in Postman and finally found the one that helps: when I toggle it on, the external API works properly.
I also tested it with Insomnia and it works properly.
Is this related to my code, to .NET 5, to something else in my code, or to the external API?
If it is related to my code, how can I solve the error?
If it is related to the external API, why do Postman and Insomnia respond OK?
The external API has a CORS policy for a specific domain, and I send the request from a different domain.
All I know is that CORS policy is applied in the browser, not in Postman, Insomnia, or C# code.
What about CORS? Is it related to CORS? If yes, what should I do?
I will be grateful for your answer.
Update
I see WWW-Authenticate: JWT realm="api" in the response header.
What exactly is it, and what should I do?
I found out the problem. It's really ridiculous.
When I use a URL like www.abc.com/api/something the request gets a 301 and Postman sends another request to www.abc.com/api/something/. The difference is just the / at the end of the URL.
I tried the new URL in Postman and the first request succeeded.
I also tried the URL in my C# code and again it's OK.
But I could not understand why.
Thanks a lot, dear @pharaz-fadaei.
You are right about the removal of authorization headers after redirects. But keep in mind that this behavior is part of the design of the HttpClient in C#. Postman and Insomnia may have different mechanisms to send the authorization headers on each consecutive request that is caused by redirects. The option that you enabled in Postman will make it use the original HTTP method you specified (PUT) as the HTTP method to send further requests based on the redirect messages (Postman uses GET method by default on requests instructed by redirect messages).
The fact that you see a 301 shows that a redirect is required. You can check the Location header value in response.Headers to see the real location and send your requests, with the authorization headers, to that endpoint directly. If I were you I wouldn't use the new location directly, because the original endpoint is what you were given by the authors of the API. Instead I would programmatically send the request to the original endpoint and, on 301 codes, resend the request to the new Location (keeping the PUT method, as the Postman option does) until you get the result. This answer can give you some ideas: https://stackoverflow.com/a/42566541/1539231
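
For illustration, a rough sketch of that manual approach could look like the following. The names (baseUrl-relative requestUri, json body, user, pass) are placeholders, and it assumes the HttpClient was created with AllowAutoRedirect = false, as in the question.

// Sketch only: follow 301 redirects manually so the Authorization header and the PUT
// method are re-sent to the new Location instead of being dropped by the handler.
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

static async Task<HttpResponseMessage> PutWithManualRedirectAsync(
    HttpClient client, Uri requestUri, string json, string user, string pass)
{
    for (var attempt = 0; attempt < 5; attempt++)   // small safety limit
    {
        var request = new HttpRequestMessage(HttpMethod.Put, requestUri)
        {
            Content = new StringContent(json, Encoding.UTF8, "application/json")
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Basic",
            Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{pass}")));

        var response = await client.SendAsync(request);
        if (response.StatusCode != HttpStatusCode.MovedPermanently)
            return response;

        // 301: keep the PUT method and credentials, just switch to the new Location.
        requestUri = response.Headers.Location.IsAbsoluteUri
            ? response.Headers.Location
            : new Uri(requestUri, response.Headers.Location);
    }
    throw new HttpRequestException("Too many redirects.");
}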
Since you see the WWW-Authenticate: JWT realm="api" header, the external API requires JWT token authentication, not basic authentication. I think you first need to check the external API's documentation.

Request Rest API with Http failed with 401 but Https works

The code below worked when the Uri was "https://someRestApi", but it reports 401 Unauthorized when the Uri is "http://SomeRestApi". Why is this happening? The server is hosted in Azure App Service, with TLS/SSL settings configured. Sending the HTTP request with Postman does work.
public static void Main(string[] args)
{
    var uri = new Uri("http://someTestApiUri.com");
    var token = GetTestToken();

    HttpWebRequest httpRequest = HttpWebRequest.Create(uri) as HttpWebRequest;
    httpRequest.Headers["Authorization"] = "Bearer " + token.AccessToken;

    using (HttpWebResponse response = httpRequest.GetResponse() as HttpWebResponse)
    {
        // Some logic here...
    }
}
This may be because the developer of the API has specified that the endpoint allows only HTTPS, either through the CORS/origin configuration or through an attribute filter, by decorating the ApiController to allow only HTTPS.
Kindly read some of the articles below:
https://jonathancrozier.com/blog/how-to-configure-your-asp-net-web-apps-and-apis-to-require-https
https://www.c-sharpcorner.com/article/how-to-enable-https-in-asp-net-web-api/
There are many articles that will give you a clue; these are just a few, so the list is not exhaustive.
So if the backend, or whoever developed the endpoint, specifies that only the HTTPS endpoint should be hit, an attempt to hit the HTTP endpoint may get back a message or an HTTP 401 status code. Again, read more about HTTP status codes and what they mean.
Also, it may work in Postman because you disabled a security setting in Postman. Please try to use a secure channel when making API calls.
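
For illustration only, the kind of attribute filter those articles describe might look roughly like this in ASP.NET Web API. The class name, the controller, and the choice of status code are assumptions, not the actual API's implementation.

// Sketch of a filter that rejects plain-HTTP calls; the status code (401 vs. 403) is the API author's choice.
using System;
using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class RequireHttpsAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        if (actionContext.Request.RequestUri.Scheme != Uri.UriSchemeHttps)
        {
            // A plain-HTTP request never reaches the action; the caller just sees an error.
            actionContext.Response = actionContext.Request.CreateResponse(
                HttpStatusCode.Unauthorized, "HTTPS is required.");
        }
    }
}

// Applied by decorating the controller:
// [RequireHttps]
// public class SomeRestApiController : ApiController { ... }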
Thanks.
Regards
Authentication and authorization in Azure App Service and Azure Functions, in the section about disabling requireHttps, clearly states:
"However, we do recommend sticking with HTTPS, and you should ensure no security tokens ever get transmitted over non-secure HTTP connections."

How to send POST request to SOAP API using HttpClient with Kerberos authentication?

I have a service, a SOAP API hosted on a Windows Server machine and protected by Kerberos authentication. I would like to consume this API using .NET Core.
Until now I was using a WCF client with basic authentication: an interface generated from the WSDL, and after providing credentials it worked fine. Now I have to switch to Kerberos.
On the server the SPNs are configured, and I have a keytab file with the principals.
As far as I know this is not such a simple case, so I have prepared a shared library in C++ to obtain the Negotiate header value (it uses principals from the proper keytab). I've included this library in the .NET Core project using DllImport and everything works fine. I'm able to connect to the API, get the Negotiate header, and obtain the ticket using kinit:
kinit -V HTTP/test.test.com@test.com works fine.
When I already have the Negotiate header value, I would like to make a simple POST request (using HttpClient) with the proper header value (the ticket), but I always get 401 Unauthorized.
using (var httpClient = new HttpClient())
{
    httpClient.BaseAddress = new Uri("http://example.com/");
    httpClient.DefaultRequestHeaders.Add("Accept", "some-type");
    // Note: Content-Type is a content header; adding it to DefaultRequestHeaders throws at runtime.
    // It normally belongs on the HttpContent (e.g. via StringContent's constructor).
    httpClient.DefaultRequestHeaders.Add("Content-Type", "some-type");
    httpClient.DefaultRequestHeaders.Add("Authorization", "Negotiate XYZAA.....");
    // ...
}
As the body I send a simple SOAP envelope, which is fine.
Should I store something more right after obtaining the ticket, or is it enough to add the ticket to each outgoing request as shown above?
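
One approach worth sketching, purely as an assumption about this setup (the endpoint, media type, and SOAPAction value below are placeholders), is to skip the hand-built header and let HttpClientHandler run the Negotiate exchange itself:

// Sketch only: let the handler perform the SPNEGO/Kerberos handshake using the process
// credentials (on Linux/.NET Core this can be the ticket cache populated by kinit).
using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static async Task<string> PostSoapAsync(string soapEnvelope)
{
    var endpoint = new Uri("http://example.com/service.svc");   // placeholder

    var handler = new HttpClientHandler
    {
        Credentials = CredentialCache.DefaultNetworkCredentials
    };

    using (var client = new HttpClient(handler))
    {
        var request = new HttpRequestMessage(HttpMethod.Post, endpoint)
        {
            Content = new StringContent(soapEnvelope, Encoding.UTF8, "text/xml")
        };
        request.Headers.Add("SOAPAction", "\"\"");   // placeholder, service-specific

        var response = await client.SendAsync(request);
        response.EnsureSuccessStatusCode();          // a 401 here means the negotiation failed
        return await response.Content.ReadAsStringAsync();
    }
}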

Check url with NTLM on server side

I need to check whether a URL exists and can be reached. To do this I send a GET request and handle the status:
var httpClient = new HttpClient();
var response = httpClient.GetAsync(new Uri(pageUrl));
isPageAccessible = response.Result.StatusCode == HttpStatusCode.OK;
However, the server uses NTLM for authentication. As I found out here, there are several steps (requests) before I get a successful OK status. For the first request I get a 401 Unauthorized status and can't get to the further steps.
All in all, how can I check a URL on a server with NTLM, completing all of those requests?
If you are accessing an authenticated server, you should provide credentials. The credentials of the running process can be provided to HttpClient for NTLM as below:

var handler = new HttpClientHandler
{
    Credentials = System.Net.CredentialCache.DefaultCredentials
};
var httpClient = new HttpClient(handler);
var response = httpClient.GetAsync(new Uri(pageUrl));
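
If the probing process does not run as an account the server accepts, a variation (a sketch with placeholder username/password/domain values, mirroring the question's synchronous style) is to register explicit credentials for the NTLM scheme:

// Sketch: explicit credentials bound to the "NTLM" scheme.
// Requires System.Net and System.Net.Http.
var credentials = new CredentialCache
{
    { new Uri(pageUrl), "NTLM", new NetworkCredential("username", "password", "domain") }
};
var handler = new HttpClientHandler { Credentials = credentials };
var httpClient = new HttpClient(handler);
var response = httpClient.GetAsync(new Uri(pageUrl));
var isPageAccessible = response.Result.StatusCode == HttpStatusCode.OK;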
You're setting yourself up for failure, since there are dozens of reasons why a request may not return a 200 OK response. One may be that the response has no content (204 No Content). Another may be that the endpoint only accepts POST or PUT requests. Another, as you've discovered, may be that it has an authentication system in front of it (401 Unauthorized). Another may be just that the response is a redirect (301 Moved Permanently or 302 Found). Or it could be behind a proxy (305, 306, etc.).
The only way you can determine whether a URL really exists is to ask the other end to prove it. Google does it, Facebook does it, Pinterest does it, etc. The way they do it is to ask the sender to set a DNS record (typically a TXT record) or a meta tag on their index.html containing a custom token they generate. If the token exists, then they are who they say they are.
Anything else is unreliable.

HTTP 407 (Proxy Authentication Required)

I have some basic questions on HTTP authentication.
1) How does the client know the server's HTTP authentication type (Basic/Digest/NTLM)? Is this configurable on the HTTP server side?
My answer: The server is configured with the authentication type it has to perform with the client, so our client API (the C# HttpWebRequest API) will automatically take care of it.
It is best to use Wireshark with an HTTP filter applied: you get the source and destination IPs at the Internet Protocol layer, the source and destination ports at the Transmission Control Protocol layer, and the authentication type at the HTTP layer.
2) If I place a Squid Linux proxy between the client and the server, does my client code also need to know the proxy's authentication type, or is the authentication type only related to the end HTTP server?
My answer: If a Squid proxy is placed between client and server, it won't use HTTP authentication for its own login validation. It may use, for example: a) DB: a SQL database; b) LDAP: the Lightweight Directory Access Protocol; c) RADIUS: a RADIUS server for login validation; etc.
So we have to send the proxy authentication credentials in the HTTP headers.
3) Using Wireshark I found that there are three requests from the browser to the server to fulfill a single request.
a) The browser sends a request without any authentication credentials, so the server responds with 401 along with a realm and nonce.
WWW-Authenticate: Digest realm="realm", qop="auth", nonce="MTM1OTYyMzkyNDU4MzpiOWM0OWY0NmMzMzZlMThkMDJhMzRhYmU5NjgwNjkxYQ=="\r\n
b) The second time the browser sends the request with credentials, realm, nonce, and cnonce, but the server still responds with 401:
WWW-Authenticate: Digest realm="realm", qop="auth", nonce="MTM1OTYyMzk0OTAyMTo3Njk3MDNhZTllZDQyYzQ5MGUxYzI5MWY2MGU5ZDg0Yw==", stale="true"\r\n
c) The third time the browser sends the same request with the same credentials, realm, nonce, and cnonce. This time the server responds 200 OK.
My question: in the second and third attempts the browser sends the same request; why does the server fail the second time and succeed the third time? Is this because of my server implementation? (I have a REST Java server with a Spring Security filter.)
I have a C# HTTP client
where the first HttpWebRequest is sent without credentials even though System.Net.NetworkCredential is set, so the client gets 407 with a realm and nonce. The second HttpWebRequest succeeds. There is no third request from this client, unlike the browser.
Why this difference between the browser and the C# client?
My answer: I still don't know what is happening here (Q3).
4) Now, the real issue I am facing: when the Squid Linux proxy came in between our client and the HTTP server, the browser did the same three-request authentication and succeeded. However, the C# HttpWebRequest failed (401) at the second request, reached the catch (Exception) {} block, and did not try a third time.
Could anyone please clarify how to resolve this issue in the C# client when a proxy server is in between?
The code below does the GET request:
HttpWebRequest request = WebRequest.Create("url") as HttpWebRequest;
request.Credentials = new NetworkCredential(loginUserName, password);
WebResponse response = request.GetResponse();
Note that our request to the proxy is sent over the TCP protocol, not the HTTP protocol; the proxy then communicates with the server over HTTP.
The HTTP request from the proxy carries our client IP in the X-Forwarded-For header.
Below are the possible solutions.
They are only required if your proxy needs authentication; otherwise ignore them.
Solution 1 (working for me):
request.Proxy.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
Solution 2:
IWebProxy proxy = WebRequest.GetSystemWebProxy();
proxy.Credentials = new NetworkCredential(UserName, UserPassword, UserDomain);
request.Proxy = proxy;
Solution 3 given by Martin:
var proxy = new WebProxy ("http://localhost:3128/");
proxy.Credentials = new NetworkCredential (UserName, UserPassword, UserDomain);
request.Proxy = proxy;
Learn more about proxy authentication at
http://wiki.squid-cache.org/Features/Authentication
It's a challenge/response protocol. Usually, the client makes an initial request without any authentication headers (you may set request.PreAuthenticate = true to send the credentials with the first request).
Then, the server responds with a list of authentication methods that it supports.
If the user did not explicitly specify an authentication method using CredentialCache, the runtime will try them all, from strongest to weakest. Some protocols (NTLM, for instance) require multiple requests from client to server. In theory, Digest should work with a single one; I have no idea why it's sending the request twice.
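
To make that explicit, here is a small sketch of pinning the authentication method with CredentialCache; the URL, credentials, and the choice of the "Digest" scheme are placeholders, not anything from the question.

// Sketch: binding credentials to a single scheme so the runtime does not probe weaker ones.
var cache = new CredentialCache
{
    { new Uri("http://example.com/"), "Digest", new NetworkCredential("username", "password") }
};
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/resource");
request.Credentials = cache;
request.PreAuthenticate = true;   // send credentials proactively on later requests instead of waiting for each challenge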
Regarding your proxy question, there are two different kinds of authentication:
The target web server's authentication: a proxy simply passes this through, and you don't need any special code in your client.
In addition to that, the proxy itself may also require authentication, which may be different from that of the target web server.
You specify these using
var proxy = new WebProxy ("http://localhost:3128/");
proxy.Credentials = new NetworkCredential ("username", "password");
and then
WebRequest.DefaultWebProxy = proxy;
or
request.Proxy = proxy;
Don't set any credentials on the WebProxy if your proxy server doesn't use any authentication.
If you can't get authentication working while using a proxy server, look at the actual requests that are being sent between the three parties (web server, proxy, client) with Wireshark.
