Digest authentication not working on IE8, Firefox and Chrome are fine - c#

I have a website with digest authentication required, and when I browse it with IE8 it gives me 401 even when the password is correct. Firefox and Chrome work correctly. I checked the Authorization headers with Fiddler and everything seems fine. Can you give me any hints about the problem?
P.S. I also have the same problem with implementing digest authentication in C#; I don't know whether the two are related.
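Edit: for reference, the value a client has to send back in the Authorization header is just a chain of MD5 hashes (RFC 2617, qop="auth"). A minimal C# sketch of that calculation (names and sample values are illustrative only, not my production code):

using System;
using System.Security.Cryptography;
using System.Text;

static class DigestDemo
{
    // Hex-encoded MD5 of a string, as RFC 2617 requires.
    static string Md5Hex(string input)
    {
        using (var md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.ASCII.GetBytes(input));
            return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        }
    }

    // response = MD5(HA1:nonce:nc:cnonce:qop:HA2) with HA1 = MD5(user:realm:password)
    // and HA2 = MD5(method:uri), for qop="auth".
    static string DigestResponse(string user, string realm, string password,
                                 string method, string uri,
                                 string nonce, string nc, string cnonce)
    {
        string ha1 = Md5Hex(user + ":" + realm + ":" + password);
        string ha2 = Md5Hex(method + ":" + uri);
        return Md5Hex(ha1 + ":" + nonce + ":" + nc + ":" + cnonce + ":auth:" + ha2);
    }

    static void Main()
    {
        // Illustrative values only.
        Console.WriteLine(DigestResponse("user", "realm", "secret",
            "GET", "/", "somenonce", "00000001", "0a4f113b"));
    }
}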

I was facing this problem, and this was the only mention of it I could find on the net. In Digest Access Authentication the normal sequence of events is:
1. GET on /url
2. 401 with a WWW-Authenticate header (this pops up the login dialog in your browser, where you enter your credentials)
3. GET on /url again, this time with the Authorization header
4. 200 OK (if everything goes well)
This works fine in Firefox and Chrome but was not working fully in IE8.
By "fully" I mean that a GET on a virtual location on the server worked, but a GET on a static file did not. For a static file I was prompted for the login again and again.
After using a sniffer I found out that when requesting a virtual location the sequence of events happened as described above, but when I requested a static file the sequence was as follows:
1. GET on /url
2. 401 with a WWW-Authenticate header (the login dialog pops up and you enter your credentials)
3. GET on /url WITHOUT the Authorization header
4. 401 Unauthorized
Basically, for a static file IE8 took the username and password but never sent them across in the Authorization header. The server, not getting this header, responded with 401, which prompted the login again.
To make IE8 work properly you have to fool it into thinking that this is not a static file but a virtual location. For me that was easy, as I had access to the server's source code; I really don't know how you would do it without that access.
If you have requested a virtual location:
1. GET /virtual_location
2. 401 with WWW-Authenticate header which will look something like
WWW-Authenticate: Digest realm="validusers#robapi.abb", domain="127.0.0.1:80", qop="auth", nonce="9001cd8a528157344c6373810637d030", opaque="", algorithm="MD5", stale="FALSE"
Notice the opaque parameter is an empty string.
On the other hand, if you requested a static file:
1. GET /staticfile.txt
2. 401 with WWW-Authenticate header which will look something like
WWW-Authenticate: Digest realm="validusers#robapi.abb", domain="127.0.0.1:80", qop="auth", nonce="81bd1ca10ed6314570b7362484f0fd31", opaque="0-1c5-4f7f4c1e", algorithm="MD5", stale="FALSE"
Here the opaque parameter is a non-empty string.
Hence, if you can ensure that the opaque parameter is always an empty string, IE8 will treat the URL as a virtual location and the request will go through normally. Since I had access to the server's code, I was able to do this.
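If your server also happens to be a .NET one, a minimal HttpListener sketch of such a challenge might look like the following (the realm and nonce values are only illustrative, and this is not the actual server code I changed):

using System;
using System.Net;
using System.Text;

class DigestChallengeDemo
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://127.0.0.1:8080/");
        listener.Start();

        HttpListenerContext ctx = listener.GetContext();
        if (ctx.Request.Headers["Authorization"] == null)
        {
            // Challenge with an EMPTY opaque so that IE8 treats the URL like a
            // virtual location and actually sends the Authorization header back.
            string nonce = Guid.NewGuid().ToString("N");
            ctx.Response.StatusCode = 401;
            ctx.Response.AddHeader("WWW-Authenticate",
                "Digest realm=\"validusers\", qop=\"auth\", nonce=\"" + nonce +
                "\", opaque=\"\", algorithm=\"MD5\", stale=\"FALSE\"");
        }
        else
        {
            // Validate the digest here (omitted) and serve the content.
            byte[] body = Encoding.UTF8.GetBytes("OK");
            ctx.Response.OutputStream.Write(body, 0, body.Length);
        }
        ctx.Response.Close();
        listener.Stop();
    }
}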
Hope this helps.
Regards,
Satya Sidhu

I had the same problem. In my case, I was requiring digest authentication for my entire site, using directives in either "<Directory />" or "<Location />". Either way works for Firefox and Safari on Mac, PC, and iOS. Unfortunately, IE8 seems to have trouble with this. After trying several other changes, I finally found that if I only required authentication on a subdirectory (e.g. "<Location /private>") and moved my content into the protected directory, IE8 started working. I went back and forth a few times, changing only this attribute, to confirm that this is the critical difference.
Incidentally, it's worth noting that a tcpdump showed that IE8 wasn't even trying to send digest authentication. It presented the auth dialog box, took my username and password, then sent a normal GET request with no authentication info.
Are (were) you protecting the entire content tree?
I'm not sure why IE8 (and only IE8) cares about this distinction, but this is what I found.
In searching for a solution to the problem, yours was the only mention that seemed relevant, and I could find no answer posted on the net. This leads me to believe that either no one tries to configure Digest authentication in this way, or most people just give up and use Firefox (or some other non-MS browser).

Wow, I'm definitely having the same problem. I have two virtual hosts, both using digest authentication. On one site I am trying to protect the entire site, and it works in all browsers I have tried except IE8. On the other site, I'm only protecting a subdirectory, and that works fine in IE8.

I had the same problem and tried to use digest authentication for the whole vhost, but the following configuration did not work in IE:
<Location />
    AuthType Digest
    AuthName "Login"
    AuthDigestDomain /
    AuthUserFile /path/to/.htdigest
    Require valid-user
</Location>
The workaround in http://lists.centos.org/pipermail/centos/2013-January/131225.html worked well:
ErrorDocument 401 "some random text"
A better solution is to exclude the Apache error pages that are normally located at /error/.*
e.g.
Alias /error/ "/usr/share/apache2/error/"
The following configuration worked well for me (see also https://bz.apache.org/bugzilla/show_bug.cgi?id=10932#c5):
<LocationMatch "^/(?!error/)">
    AuthType Digest
    AuthName "Login"
    AuthDigestDomain /
    AuthUserFile /path/to/.htdigest
    Require valid-user
</LocationMatch>

Related

ASP.NET Web API Google authentication issue: HTTP 404

I am trying to set up a social login for my site.
Here is what I did:
I created credentials on Google and have both the ClientID and the Secret.
In the default MVC app, in App_Start/Startup.Auth.cs, I uncommented the app.UseGoogleAuthentication() call so that it looks like the sketch shown after this list.
Build solution!
Made sure the authorized JavaScript origins and redirect URL are correct, and that the other things needed on console.cloud.google.com are done, including activation of the Google+ API.
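For reference, with the call uncommented, the relevant part of Startup.Auth.cs looks roughly like this (the IDs are placeholders for my real values):

using Microsoft.Owin.Security.Google;
using Owin;

public partial class Startup
{
    public void ConfigureAuth(IAppBuilder app)
    {
        // ... cookie and external sign-in middleware from the default template ...

        app.UseGoogleAuthentication(new GoogleOAuth2AuthenticationOptions
        {
            ClientId = "your-client-id.apps.googleusercontent.com", // placeholder
            ClientSecret = "your-client-secret"                     // placeholder
        });
    }
}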
Eventually the Google authentication button should appear in the _ExternalLoginsListPartial partial view, but as far as I can see I still have 0 login providers. I'm not sure why, or what I can do about it.
var loginProviders = Context.GetOwinContext().Authentication.GetExternalAuthenticationTypes();
//loginProviders.Count() here returns 0
I tried researching, but most answers say that you forgot to build or to restart the server. I tried that, but nothing changed.
As a last resort, I tried following a tutorial: https://youtu.be/WsRyvWvo4EI?t=9m47s
I did everything as shown there; I should be able to reach the api/Account/ExternalLogins?returnUrl=%2F&generateState=true URL and receive a callback URL from Google.
But I got stuck with the same HTTP 404 error at 9:50.
To answer my own question: everything turned out to be fine.
All I had to do was give it some time.
After a couple of hours, the Google provider appeared on the page.
For future readers: if you meet a 404 in this case, another possibility is an active filtering rule against query strings in IIS. One of the commonly copy-pasted rules aiming to block SQL injection scans the query string for "open" (to catch OPEN cursor). Your OAuth request probably contains this word in the scopes section (the data you want to pull from the Google profile).
IIS -> Request Filtering
Switch to the tab "Rules"
Inspect and remove any suspicious active filters there

C# .Net MVC and CA/Siteminder middleware

History: I have a tiny app that has lived on a Linux web server for a while: HTML5/JavaScript/Perl CGI scripts. There is a sort of third-party middleware called Siteminder from CA that provides SSO services, and it works fine. In my case, on the Linux box there is a dir in the DOCROOT that holds the public-facing HTML, JS & Perl CGI scripts. There is a different dir where the pages and scripts for the authorized content sit. Siteminder is configured to be aware of this auth-dir and the request paths that contain that auth-dir element.
Siteminder is tied into Apache and observes the request stream. When it sees a request with a path element that it cares about, it holds the inbound request, redirects the visitor to a branded auth page, deals with the auth flow and then, if authenticated, sends the original request on through. In this case the auth is tied to an AD group. Again, this works; my pages and code are totally unaware of the existence of Siteminder.
For reasons above my pay grade it has been decided to move the content from the Linux box to an IIS server and convert everything to C# .NET MVC. I am NOT a Windows person, but this is what is on my plate at the moment.
Our local Siteminder experts tell me that SM works exactly the same under IIS as under Linux, and that once I convert my code it doesn't need to be aware of SM either... yet something is not working.
In my case, due to user interaction a modal popup appears in the Public section (HomeController) that holds a small form. Clicking the submit button triggers a jQuery GET (I've also tried PUT, POST and a redirect) action to a method in the AuthController, a la:
$.get({
    'url': "/Auth/AddNewData",
    'contentType': "application/x-www-form-urlencoded; charset=UTF-8",
    'dataType': "json",
    'traditional': true,
    'data': {
        'thing': myThing,
        'otherThing': myOtherThing
    }
}).done(function (data, textStatus, jqXhr) {
    console.log("it worked");
}).fail(function (jqXhr, textStatus, errorThrown) {
    console.dir(jqXhr);
    console.log(textStatus);
    console.log(errorThrown);
});
I am aware that there are .NET ways of stating the target URL; please bear with me.
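For context, the AuthController method that call targets is an ordinary MVC action along these lines (a simplified, hypothetical sketch; the real action isn't shown here):

using System.Web.Mvc;

public class AuthController : Controller
{
    // Hypothetical sketch of the action the jQuery call hits.
    [HttpGet]
    public ActionResult AddNewData(string thing, string otherThing)
    {
        // ... persist the data ...
        return Json(new { ok = true }, JsonRequestBehavior.AllowGet);
    }
}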
What I expect to happen is that if the visitor does not have the auth session cookie that Siteminder sets, they are redirected to the SM auth flow and, once authorized, the request completes.
Instead, what happens is:
If I use the GET method: it fires and I get a 302 "Object Moved" response.
If I use the POST method: it fires and I get a 200 OK response, but the returned payload is a small amount of HTML from SM saying that if I am not redirected to my destination shortly, I should press the button included in the form in that HTML. The jQuery fail promise fires, though, because it is expecting a JSON result, not HTML.
If I use PUT: nothing happens.
If I comment out my jQuery ajax call and just use a "location" redirect, then SM puts up its challenge page; I can log in; and the triggering request is "continued" into a loop of length 3: it calls the page and fails with a 302 that seems to send the request back to SM, which sends it back to the target address to get another 302, then back to SM, then back to the target, until it finally generates a 404 message.
I am deep in the weeds here. Advice would be wonderful.
Oh, PS: running this in debug mode on my desktop (no SM) works. Running the Release version on the IIS dev server with SM is what fails.
EDIT
More info: after some additional Siteminder config I started getting CORS violation messages. I am setting CORS headers now, but that changes nothing. Siteminder seems to strip the CORS headers :/
Another thing I have noticed is that if I craft the failing GET request as a JavaScript location.href = url + "?" + queryStringData redirect, everything works. Current jQuery has all but deprecated setting async to false, so crafting a non-async version is more than I want to tackle at the moment.
The local Siteminder folks will file a ticket soon, I think.
EDIT 2
I have ended up with a hacky "fix". I cannot use standard GET, POST, PUT, etc. methods to interact with the MVC methods because Siteminder is in the way. I have added CORS headers and have tried JSONP; none of that works in this case.
I have to use "redirects" instead: setting location.href = "/usr?thing=foo&bar=baz" in the JavaScript functions, then redirecting to a URL as the result of the MVC methods.
This might be a Siteminder config issue. The local Siteminder mavens have submitted a ticket.
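Roughly, the hypothetical action sketched earlier ends up rewritten for the redirect workaround like this (again illustrative, not the real code):

using System.Web.Mvc;

public class AuthController : Controller
{
    // The browser navigates here directly via location.href, so SiteMinder can
    // run its full challenge/redirect flow, and the action answers with a
    // redirect instead of JSON.
    [HttpGet]
    public ActionResult AddNewData(string thing, string otherThing)
    {
        // ... persist the data ...
        return RedirectToAction("Index", "Home");
    }
}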
Your question still isn't clear about what the problem is with each of the bullets you listed. Is the GET behavior what you expect? A 302 is just a redirect; is it the redirect you expect?
For POST, you are seeing the "post preservation" behavior. It's what SiteMinder does so that if your session has timed out in the middle of filling out a form, you don't lose your work. Post preservation is a configuration parameter in the "Agent Configuration Object" in SiteMinder. It sounds like your SM admins have configured the ACO differently for the IIS server than they did for the Linux server.
PUT: nothing happens? You don't get any response at all, the connection just hangs?
As for your last bullet, with the redirect loop: this looping typically indicates that your user is logged in (authenticated) but not authorized, which is a SiteMinder policy configuration issue (again, it sounds like different policies are being applied to your IIS server than to the Linux one).
HTH!
-Richard

Keep getting Error: redirect_uri_mismatch using YouTube API v3

Hi I hope someone can help me out here.
I have a web application (ASP.NET) on my local machine. I am trying to upload a video to YouTube using this sample: https://developers.google.com/youtube/v3/code_samples/dotnet#upload_a_video
I have set up a client ID and secret for a web application in the Google console. When I try to upload a video, a browser tab opens to select one of my Google accounts, and once I sign in I get redirect_uri_mismatch. The response details on that page are below:
cookie_policy_enforce=false
scope=https://www.googleapis.com/auth/youtube.upload
response_type=code
access_type=offline
redirect_uri=http://localhost:55556/authorize/
pageId=[some page id removed here for security reasons]
display=page
client_id=[some unique id removed here for security reasons].apps.googleusercontent.com
One interesting thing is that redirect_uri=http://localhost:55556/authorize/ is completely different from the one set up in the Google console and the one in client_secrets.json; also, each time I get the error page the port number changes.
Redirect URLs and origins are set as follows in the Google console; I think I have added all combinations just in case:
Authorized redirect URI
http://localhost/
https://localhost/
http://localhost:50169/AddContent.aspx
https://localhost:50169/AddContent.aspx
http://localhost:50169
Authorized JavaScript origins
http://localhost/
https://localhost/
http://localhost:50169/
https://localhost:50169/
I am not sure why the redirect_uri on the error page does not match any of the authorized redirect URIs I have specified in the Google console. Any ideas?
Also, is it possible that everything is set up correctly in the Google console and in my code, but this error is triggered by something else, like maybe some setting I missed on my YouTube account? I did not make any settings changes, since I don't think I have to; is that correct?
OK, I believe that direct video upload to the website owner's account is no longer supported in the YouTube API v3.0, according to these posts:
Can YouTube Direct Upload to a Common Account for All Users?
How can I get the youtube webcam widget to upload to one account using API?
Shame, I think I will need to host the videos that users upload on my servers.
However, the original issue was fixed by adding this URI to the redirect URIs in the developer console:
http://localhost/authorize/
Google OAuth 2 authorization - Error: redirect_uri_mismatch
I got it to work by setting the Redirect URIs to exactly this:
http://localhost:50517/signin-google
Note:
- it does not work with a trailing slash
- port number is whatever your Visual Studio is assigning
- I set JavaScript Origins to:
http://localhost:50517/
I'm with you, though; it would be nice if someone actually documented this somewhere...
You should look into your code where you create the authorization URI. You need to pass one of the redirect URIs you registered with the Google developer console. I guess you're using some OAuth2 library which uses localhost:port/authorize as the default redirect URI. The port changes because each time you start your local server, it picks a different port number. To fix it, you should specify a port number when starting it, for example 8080. Then you should register localhost:8080/AddContent.aspx in the Google developer console and pass it to whichever library you use to create the authorization URI.
I experienced a similar problem when trying to set up the quickstart app for the Drive REST API. I kept getting the redirect_uri_mismatch error, and the port number with that error kept changing. The fix for me was to change the redirect URI in the Google Developers Console for my app to not include the port number.
There is a really easy way to get around this, and I kicked myself when it dawned on me.
I am using "Web Application" credentials - you'll want the credentials manager open, by the way.
Run the DotNet sample app and let the browser open (I get the "Select An Account" page), then look in the URL for the redirect URI that's been automatically generated by Google's code, something like:
redirect_uri%3Dhttp://localhost:62041/authorize/
Then just go to the credentials manager, add this URL to the allowed list and save. Now select your Google account and see what happens - it takes a few minutes for the API to update - if you get the redirect error page, just hit back and select your account again - eventually it works and returns back to Visual Studio.
Once the account has been authorized it sticks (clear the bin directory to unstick it), but this means you can now put a breakpoint in the code and look at the credentials variable to get the refresh token everyone is so desperately trying to get, so that you can persist account connections.
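For reference, the authorization call in the dotnet sample is roughly the following (the file name and the "user" key are placeholders); the credential variable at the end is the one to inspect at a breakpoint:

using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Google.Apis.Auth.OAuth2;
using Google.Apis.YouTube.v3;

class RefreshTokenDemo
{
    static async Task Main()
    {
        UserCredential credential;
        using (var stream = new FileStream("client_secrets.json", FileMode.Open, FileAccess.Read))
        {
            // This is the call whose auto-generated http://localhost:<port>/authorize/
            // redirect URI has to be whitelisted in the developer console.
            credential = await GoogleWebAuthorizationBroker.AuthorizeAsync(
                GoogleClientSecrets.Load(stream).Secrets,
                new[] { YouTubeService.Scope.YoutubeUpload },
                "user",
                CancellationToken.None);
        }

        // Break here: the refresh token most people are after.
        string refreshToken = credential.Token.RefreshToken;
    }
}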

Active Directory authentication with C# (encoding issue)

I have a problem authenticating my user against Active Directory. I am trying to authenticate the user via PrincipalContext.
My issue is that when the user's password contains a non-ASCII character, validation fails even with the correct credentials. I have this problem only in my prod environment; it works just fine in the UAT and development environments.
How can I resolve this issue? Is there any AD setting that could have anything to do with this?
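For reference, the check is essentially the following (the domain name is a placeholder, not my real configuration):

using System.DirectoryServices.AccountManagement;

class AdAuthDemo
{
    // Minimal sketch of the PrincipalContext check described above.
    static bool Validate(string userName, string password)
    {
        using (var context = new PrincipalContext(ContextType.Domain, "corp.example.com"))
        {
            // Returns false on prod for correct credentials whenever the
            // password contains non-ASCII characters (the behaviour in question).
            return context.ValidateCredentials(userName, password);
        }
    }
}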
Try changing the password encoding to UTF-8.
It is a shot in the dark, but hear me out. This is why I asked whether this was a web project. I had a similar problem with identical symptoms. A user came to me saying he couldn't log in to my website. It turned out he had "special" characters in his password. It didn't make sense, but after disabling custom errors I realized the error was due to ASP.NET Request Validation: http://msdn.microsoft.com/en-us/library/hh882339%28v=vs.100%29.aspx
Request validation is a feature in ASP.NET that examines an HTTP request and determines whether it contains potentially dangerous content. In this context, potentially dangerous content is any HTML markup or JavaScript code in the body, header, query string, or cookies of the request. ASP.NET performs this check because markup or code found in the URL query string, cookies, or posted form values might have been added to the request for malicious purposes.
Pretty much everything that looked like a tag got flagged, and the runtime threw an exception well before my code had a chance to validate the user's password.
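The question doesn't say whether the site is MVC or Web Forms, but if it is MVC, one common way to exempt just the login post from request validation is the ValidateInput attribute; a sketch with hypothetical action and parameter names:

using System.Web.Mvc;

public class AccountController : Controller
{
    // Exempt only this POST from request validation and validate/encode the
    // input yourself before handing it to AD.
    [HttpPost]
    [ValidateInput(false)]
    public ActionResult Login(string userName, string password)
    {
        // ... authenticate against Active Directory as before ...
        return RedirectToAction("Index", "Home");
    }
}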
Hope this helps!

How to get started using DotNetOpenAuth

I created a simple page using the code provided by this page (the first sample):
http://www.dotnetopenauth.net/developers/code-snippets/programmatic-openid-relying-party/
But I can't seem to get it to work. I can redirect to the provider, but when the provider redirects back to my page, I get error 500: "The request was rejected by the HTTP filter".
I already checked ISAPI filters, of which I have none.
I've never seen that error before. Is this page hosted by the Visual Studio Personal Web Server (Cassini) or IIS? I suspect you have an HTTP filter installed in IIS (or perhaps in your web.config file) that is rejecting the incoming message for some reason.
Note that you need to turn off ASP.NET's default page request validation on any page that can receive an OpenID authentication response, because those responses can include character sequences that look like HTML/JavaScript-injection attacks but are in fact harmless.
I discovered that I'm using ISA on the server, so I just followed these instructions to get it working:
http://blog.brianfarnhill.com/2009/02/19/sharepoint-gets-the-error-the-request-was-rejected-by-the-http-filter/
