Access cross-domain single sign-on websites using .NET - C#

Let's say I have two websites that live on separate domains and are connected via SSO login. I log into the first website and authenticate, and then I visit the second website using a link visible on the first website. This link redirects to the second website and no credentials are needed. (Note that these websites are developed and hosted by another company and I don't have access to the source code. I have one set of login credentials and I want to access the page data of the second website.)
I want to access the websites using .NET code. I have done some work and got as far as accessing the link on the first website using HTTP requests and cookies, which redirects to the second website. But as soon as I click the link it opens domain 2 in a new browser tab and generates a new session ID as well, which has no connection with the first website's cookie. However, I can access the data of website 2 when I pass its URL and current session ID manually.
Is there a security mechanism behind the SSO login that can be broken through to obtain a common session ID for both website 1 and website 2? How can I find the connection between the two domains when they have two different session IDs? I can't post the code here as it contains private information belonging to my client.

I solved the problem. A SAML request and relay state were sent to the login server from both domains. Even though they are not identical, they have to be taken into account when accessing the form data. Also, among the cookies I had to send pls_login_cookieTime explicitly on every request, from the very first request to the last.
Example:
var cookie = new
{
    // anonymous object holding every cookie value to forward; Cookietime and login are explained below
    pls_login_cookieTime = Cookietime.CookieValue("pls_login_cookieTime"),
    pls_login_SimpleSAMLSessionID = HttpUtility.UrlEncode(login.CookieValue("pls_login_SimpleSAMLSessionID")),
    TimeOutCheckID = login.CookieValue("TimeOutCheckID"),
    pls_login_SimpleSAMLAuthToken = login.CookieValue("pls_login_SimpleSAMLAuthToken"),
    pls_login_rememberme = login.CookieValue("pls_login_rememberme")
};
Here, Cookietime and login refer to two different URL requests: Cookietime is for the initial URL and login is for the login server URL. Even though the login response does not return a pls_login_cookieTime cookie, I had to pass it as a request header for the final URL.
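A minimal sketch of attaching those values by hand with HttpWebRequest (the final URL here is a placeholder, and Cookietime and login are the helper objects mentioned above, so this is an illustration rather than the original code):
// Build the Cookie header manually so the very first pls_login_cookieTime value
// travels all the way to the final request on the second domain.
var request = (HttpWebRequest)WebRequest.Create("https://domain2.example.com/portal");
request.Method = "GET";
request.AllowAutoRedirect = true;
// Setting the Cookie header directly works as long as request.CookieContainer stays null.
request.Headers[HttpRequestHeader.Cookie] = string.Join("; ", new[]
{
    "pls_login_cookieTime=" + Cookietime.CookieValue("pls_login_cookieTime"),
    "pls_login_SimpleSAMLSessionID=" + HttpUtility.UrlEncode(login.CookieValue("pls_login_SimpleSAMLSessionID")),
    "TimeOutCheckID=" + login.CookieValue("TimeOutCheckID"),
    "pls_login_SimpleSAMLAuthToken=" + login.CookieValue("pls_login_SimpleSAMLAuthToken"),
    "pls_login_rememberme=" + login.CookieValue("pls_login_rememberme")
});
using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string pageData = reader.ReadToEnd();   // page data of the second website
}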
There were only a few small points of confusion. If you use a tool like Fiddler to inspect the HTTP requests, you can understand most of how the cookies are managed across the multiple domains. The rest is up to you: you have to think a little hard and differently, but overall it is not difficult, just confusing.

Related

ASP.NET processing unauthenticated file request in site with Windows Authentication: How?

I have a curious problem with a legacy ASP.NET web application using Windows Authentication. A particular page is crashing, and an inspection of the page and the site logs indicates the page is crashing because the request is not properly authenticated - no Windows identity is being requested by IIS or supplied by IE 11.
The page has a curious path; it took a few minutes to decode how it was originally assembled. The initial request is not for a specific page, but is merely a folder-only URL that is routed to Default.aspx. The handler checks the query string and redirects to specific pages accordingly.
The initial request to the site is authenticated, as evidenced by the IIS site logs. The page to which the request is redirected (via Response.Redirect) does not authenticate. The absence of the Windows authentication challenge leaves the targeted page with no automatic identity, leading to the page crash (code depending on the identity fails). The sequence goes this way:
Original URL: /sitename/folder/?parameter1=value&parameter2=value
IIS issues the authentication challenge, and the authenticated user is shown in the logs - eg, domain\user
The request is then handled by folder/Default.aspx (default page as defined in IIS)
Default.aspx.cs inspects the query string, and routes the request to (e.g.) OtherPage.aspx via Response.Redirect (sketched after this sequence)
OtherPage.aspx is requested, and the request is logged - with no authentication, and no challenge
OtherPage.aspx.cs crashes (no user credential)
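For reference, the routing step in Default.aspx.cs presumably looks something like the sketch below; the query-string name and target page are the placeholders from the sequence above, not the real code.
protected void Page_Load(object sender, EventArgs e)
{
    // The folder-only request has already been authenticated by IIS at this point.
    string parameter1 = Request.QueryString["parameter1"];
    if (!string.IsNullOrEmpty(parameter1))
    {
        // Response.Redirect returns a 302, so the browser issues a brand-new request
        // for OtherPage.aspx -- the request that is arriving without authentication.
        Response.Redirect("OtherPage.aspx?parameter1=" + Server.UrlEncode(parameter1));
    }
}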
I am trying to theorize how or why ASP.NET is even permitting the unauthenticated file request. I have tried to reproduce the behavior in a test environment, and have been unable to do so. I have suspected that "Automatic logon in Intranet zone" might have been disabled, or that stored local credentials may be present but somehow causing a conflict, but neither of those scenarios panned out. The former did result in a failed authentication attempt and a proper 401 response from the server (the target page was not fired in a test environment).
Further research into this question has led to a solution, if not a 100% dissection of the cause.
The users experiencing the problem were accessing the target site via a link in an email message. The link, for some unknown reason, inhibited the credential exchange between IE and IIS until the site URL was placed in the "Local Intranet" sites list of IE. This allowed the "Automatic logon in Intranet sites only" option to apply, which in turn allowed the authentication to work.
The reason this is not a "100% dissection" is because these users were accessing the site previously, wherein authentication worked when the site was accessed conventionally. Exactly how the email message link inhibited the authentication exchange is not known. At the moment, I theorize that some security setting inhibits authentication when originating from an email link unless the specific site URL is explicitly qualified as a trusted or Intranet site.
Thanks for your consideration.

Preventing Cross-Site Request Forgery Attacks With a Windows Client Application

I am developing a web application with Web API 2.
My client application is a Windows application developed in C# on .NET 4.0.
The client application sends some JSON data to the Web API application, which stores the data in a database.
Now the issue is that someone could send the request by some means other than my application and push junk data to the server. I have authentication on the server, but it isn't enough; I need some kind of token to handle this issue.
After some searching I found this article and read it, but there the client is a web application.
Could I use this method in my Windows client app? How?
Bottom line: You shouldn't need to.
By definition, CSRF attacks can only affect client applications that share cookies across domains. For example, if you visit www.bank.com with your browser and then open another tab to www.evil.com, and www.bank.com does not protect against CSRF, then www.evil.com may be able to POST a form submission to www.bank.com while you are logged in and transfer money by forging a request to the form's action URL on the transfer-money page.
If your client is a Windows application, the HTTP client should not have cookies stored for any other service other than your web API.
Note that the above only applies when cookies are used as the session management mechanism (i.e. not Kerberos, NTLM, Basic Auth, etc.).
"I have authentication on the server but it isn't enough"
This should be enough, as an attacker cannot forge an HTTP request to your API that will be sent along with the victim's cookies: the cookies are separated because the requests come from different web client instances. Much like being logged into Google in Chrome but then accessing Google in Firefox, you will not share the same logged-in session.
Of course, protect your API with HTTPS so the information is encrypted while in transit. Note that this does not protect against decompilation of your source code, which is something that is not easy to prevent. At the end of the day you cannot trust clients that are not under your control. You can make it difficult, but not impossible, for someone to work out or change what is being sent to your API.
Cross-site anti-forgery tokens are a form of authentication. They authenticate the client that is sending the request: the client has to visit a certain page to get the token from the server, so a client that has never visited that page cannot somehow just send random data to that server.
The purpose of authentication is for the server to authenticate the client (the other way around is also possible, but let's set that aside for the moment). You set the system up so that it is very difficult for others to pretend to be your Windows Forms app. Note that it can be made very difficult, but theoretically it is always possible to fake, so the aim is to set up authentication such that an attacker considers an attack impractical.
This authentication should not be mixed up with the authentication that verifies the human user. They are different. An app can provide a UI for human users to log in, yet not be the app you wrote. So you need to authenticate two things:
the request actually comes from your app; if that succeeds, then
the human user is who they claim to be; otherwise
reject the request
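As a rough sketch of that two-step check from the client side (every URL, header name, and credential value below is made up for illustration; HttpWebRequest is used because the client targets .NET 4.0):
using System;
using System.IO;
using System.Net;
using System.Text;

class ApiClient
{
    // Embedded app key: raises the bar, but decompilation can still recover it --
    // as noted above, you can only make forgery impractical, not impossible.
    const string AppKey = "my-app-key";

    public static string PostJson(string user, string password, string json)
    {
        var request = (HttpWebRequest)WebRequest.Create("https://api.example.com/api/values");
        request.Method = "POST";
        request.ContentType = "application/json";

        // 1) Authenticate the human user (Basic over HTTPS, kept simple for the sketch).
        string creds = Convert.ToBase64String(Encoding.UTF8.GetBytes(user + ":" + password));
        request.Headers["Authorization"] = "Basic " + creds;

        // 2) Authenticate the application itself with a custom header the server checks.
        request.Headers["X-App-Key"] = AppKey;

        byte[] body = Encoding.UTF8.GetBytes(json);
        using (Stream s = request.GetRequestStream())
            s.Write(body, 0, body.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd();
    }
}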

Tamper with HTTP requests programmatically

I need to sign in to a site. I can do this via a URL such as url.com/ssorequest?parameters=123. If this is typed into the address bar I am signed in and get redirected to the portal.
Now I'm supposed to do this through an HTTP POST request programmatically, but I can't get it to work; I get redirected to a sign-in form instead of the portal, i.e. I don't get signed in.
I used Fiddler to find out what the difference between the two methods was. I found that a couple of behind-the-scenes GET requests were different: the browser's GET requests send cookie data to the server, while Fiddler's POST request does not.
When I use Fiddler to repeat the browser's first call, it doesn't send the cookie data either. So it only works when I do it via the browser window, unless I use breakpoints in Fiddler and tamper with the requests to include the cookie data.
Q: Why does it behave differently from the browser, both with the programmatic HTTP POST and when the request is replayed from Fiddler?
Q: Is there any way to tamper with the outgoing requests programmatically in my C# app without writing my own Fiddler-like application?
Most probably you have encountered an anti-forgery cookie. It works by ensuring that you are signing in using the page that was first requested and loaded in the browser, and the cookie is valid for one request only, hence Fiddler will not be able to log in if you run the same request again.
Using C#, you first have to request the sign-in page and collect the cookies provided with that page in a cookie container. Then, when you post the page along with the data, you have to make sure that the cookie is attached to the request.
Edit:
Step 1: Browse any page on the site. This will initiate the session and give you the session cookie.
Step 2: Request the sign-in page, sending along the cookie obtained in step 1 so that the server recognizes the session. This step is critical. At this stage, one of two things can happen depending on the security system the site is using: either it will send a security cookie along with the session cookie, or it will add a hidden variable to the form whose value serves as a security token. Make sure that you capture this token/cookie.
Step 3: Post the login information to the sign-in page (or whatever page the form action leads to) along with the cookie/token obtained in the previous steps. If it is a token, include it in your post data along with the login information; if it is a cookie, add it to the request.
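A sketch of those steps with HttpClient and a shared CookieContainer; every URL, form-field name, and the ExtractHiddenField helper below are assumptions for illustration, not taken from the site in question.
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

class SsoSignIn
{
    static async Task<string> SignInAsync(string user, string password)
    {
        var cookies = new CookieContainer();
        var handler = new HttpClientHandler { CookieContainer = cookies, AllowAutoRedirect = true };

        using (var client = new HttpClient(handler))
        {
            // Step 1: hit any page so the server starts a session and sets the session cookie.
            await client.GetStringAsync("https://portal.example.com/");

            // Step 2: load the sign-in page; the session cookie goes out automatically,
            // and any anti-forgery token hidden in the form is pulled out of the HTML.
            string loginPage = await client.GetStringAsync("https://portal.example.com/signin");
            string token = ExtractHiddenField(loginPage, "__RequestVerificationToken");

            // Step 3: post credentials to the form's action URL, echoing the token back.
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "username", user },
                { "password", password },
                { "__RequestVerificationToken", token }
            });
            var response = await client.PostAsync("https://portal.example.com/signin", form);
            return await response.Content.ReadAsStringAsync();   // the portal page, if login succeeded
        }
    }

    // Naive helper for the sketch: grabs the value of a hidden input by name.
    static string ExtractHiddenField(string html, string name)
    {
        var match = Regex.Match(html, "name=\"" + Regex.Escape(name) + "\"[^>]*value=\"([^\"]*)\"");
        return match.Success ? match.Groups[1].Value : string.Empty;
    }
}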

URL rewrite in ASP.NET application

How do I redirect URLs based on the registered client in C#/.NET or ASP.NET 4.0? For example, if a client registers as "client1" and our website is www.mycompany.com, then every page the client proceeds to should appear as www.client1.mycompany.com.
More detailed example:
For example, another client created is Client2. The pages I have created are, in general, like:
"www.mycompany.com/product.aspx"
"www.mycompany.com/categories.aspx" should be shown as
"www.client2.mycompany.com/product.aspx" and
"www.client2.mycompany.com/categories.aspx" respectively
I have searched the web and found solutions for static pages, or ones that use Global.asax during application startup, but haven't found anything that applies after the user has logged in.
I have done something similar before on a few sites, and there are a couple of methods you could use. Assuming that you have things set up so that all subdomains (*.url.com) send any user to your server, and IIS is set up to handle them all in the same site (i.e. no host header required, just the IP), you can use one of the following methods:
After login, simply send the user to that URL. Since .NET won't care about the URL and the server knows how to render it, it should be that simple. This assumes all your navigation uses relative paths, and you must enable cookie sharing for that domain. This is required if the login cookie was issued on 1.url.com and you then send the user to 2.url.com. You can share cookies within the same parent domain; it requires a little work, but it can be done.
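A minimal sketch of the cookie-sharing part, assuming ASP.NET forms authentication (the domain, page, and variable names are placeholders): issue the auth cookie for the parent domain so every subdomain receives it. The same effect can be achieved declaratively via the domain attribute on the <forms> element in web.config.
// userName and clientName come from your login logic (placeholders here).
HttpCookie authCookie = FormsAuthentication.GetAuthCookie(userName, false);
authCookie.Domain = ".url.com";   // share across 1.url.com, 2.url.com, ...
Response.Cookies.Add(authCookie);
Response.Redirect("https://" + clientName + ".url.com/Default.aspx");
If the subdomains are served by separate applications, they also need a matching machineKey so the auth ticket issued by one can be decrypted by the others.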
Create a generic login page that makes a web service call back to the server to see if the user can log in. If they can, send back to the browser a command, along with the correct URL, that tells the client's browser to post the credentials (username, password) directly to that site's login page. This will log them into their site and assign the cookies correctly, all from one simple login page. You could even make an external login page that exists only for this purpose. In the end, all the generic page does is check whether the user can log in and then send their credentials to the correct page that performs the actual login. I recommend this be done with a POST over SSL for security reasons.
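The second method might look roughly like this sketch, where MembershipService, the page controls, and the field names are all hypothetical: the generic page verifies the credentials, then returns a self-submitting form so the browser posts them straight to the client-specific login page over HTTPS.
protected void LoginButton_Click(object sender, EventArgs e)
{
    // Hypothetical back-end helpers -- not part of the original answer.
    if (!MembershipService.CanLogin(UserName.Text, Password.Text))
    {
        ErrorLabel.Text = "Invalid credentials.";
        return;
    }

    string targetUrl = "https://" + MembershipService.GetClientSubdomain(UserName.Text)
                       + ".url.com/Login.aspx";

    // Send the browser a form that immediately posts the credentials to the
    // client's own login page, so the auth cookie is set for that subdomain.
    Response.Write(
        "<form id='relay' method='post' action='" + targetUrl + "'>" +
        "<input type='hidden' name='username' value='" + Server.HtmlEncode(UserName.Text) + "' />" +
        "<input type='hidden' name='password' value='" + Server.HtmlEncode(Password.Text) + "' />" +
        "</form><script>document.getElementById('relay').submit();</script>");
    Response.End();
}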
I hope that makes sense.
There's a project called UrlRewritingNet which I use - it's pretty old but the source is available so you could recompile it for 4.0.
Link is at http://urlrewriting.net/149/en/home.html

Easiest way to get web page source code from pages that require logins -- C#

So I play an online game that's web based and I'd like to automate certain things with it using C#. The problem is that I can't simply use WebClient.DownloadData() because I need to be logged in to actually receive the source. The other alternative was to use the built-in web browser control, but that doesn't give me access to the source code. Any suggestions?
I don't think NetworkCredentials will work in all cases. This only works with "Basic" or "Negotiate" authentication.
I've done this before with an internal website for some load testing, but it sounds like you are trying to "game" the game. For that reason I won't go into details, but the login to the site is probably being done in the form of an HTTP POST when you hit the login button.
You'd have to trap the POST request and replicate it in your code and make sure that your implementation maintains the session state as well, because if the game site is written well at all it will make sure that the current session has logged in before doing anything game related.
You can set the login credentials on the webclient using its Credentials property before calling DownloadData:
WebClient client = new WebClient();
client.Credentials = new NetworkCredential("username", "password");
byte[] data = client.DownloadData("http://example.com/page");   // example URL, not from the original post
EDIT: As mjmarsh points out, this will only work for sites that use a challenge-response method of authentication as part of a single request (I'm so used to dealing with this at work, I hadn't considered the other types!). If the site uses forms authentication (or indeed any other form of authentication), this method will not work as the authentication is not part of a single request - multiple requests are needed that you will need to handle yourself.
Network credentials will not work, as mjmarsh has already pointed out.
While web scraping, we come across a lot of pages where a login is needed. One of the approaches I use is to install Fiddler and monitor the POST and GET requests while manually logging in to the site. This lets you find out how the browser performs the login. Then you need to recreate the same process in code.
For example, most web servers use cookies to mark the session as authenticated. So you can post the username and password to the web site, record the cookie, and then use that cookie to access any further pages on the site.
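That approach could look like the sketch below (the URLs and form-field names are invented): a WebClient subclass that keeps a CookieContainer, so the cookie set by the login POST is replayed on every later request.
using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class CookieAwareWebClient : WebClient
{
    public readonly CookieContainer Cookies = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        var http = request as HttpWebRequest;
        if (http != null)
            http.CookieContainer = Cookies;   // send and receive cookies automatically
        return request;
    }
}

class Scraper
{
    static void Main()
    {
        using (var client = new CookieAwareWebClient())
        {
            // Replicate the login POST captured in Fiddler; the auth cookie lands in Cookies.
            var form = new NameValueCollection
            {
                { "UserName", "username" },
                { "Password", "password" }
            };
            client.UploadValues("https://game.example.com/login", form);

            // Subsequent requests ride on the same session cookie.
            byte[] page = client.DownloadData("https://game.example.com/members/home");
            Console.WriteLine(Encoding.UTF8.GetString(page));
        }
    }
}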
Please check the following link to read more about advanced web scraping:
http://krishnan.co.in/blog/post/Web-Scraping-Yahoo-Mail.aspx
In this blog post, you will find how to authenticate to a Yahoo account and then read the page after authentication.