Is there any port specific cookie in ASP.NET [duplicate] - c#

I have two HTTP services running on one machine. I just want to know if they share their cookies or whether the browser distinguishes between the two server sockets.

The current cookie specification is RFC 6265, which replaces RFC 2109 and RFC 2965 (both RFCs are now marked as "Historic") and formalizes the syntax for real-world usages of cookies. It clearly states:
Introduction
...
For historical reasons, cookies contain a number of security and privacy infelicities. For example, a server can indicate that a given cookie is intended for "secure" connections, but the Secure attribute does not provide integrity in the presence of an active network attacker. Similarly, cookies for a given host are shared across all the ports on that host, even though the usual "same-origin policy" used by web browsers isolates content retrieved via different ports.
And also:
8.5. Weak Confidentiality
Cookies do not provide isolation by port. If a cookie is readable by a service running on one port, the cookie is also readable by a service running on another port of the same server. If a cookie is writable by a service on one port, the cookie is also writable by a service running on another port of the same server. For this reason, servers SHOULD NOT both run mutually distrusting services on different ports of the same host and use cookies to store security sensitive information.

According to RFC 2965 section 3.3.1 (which browsers may or may not follow), unless the port is explicitly specified via the Port attribute of the Set-Cookie header, a cookie may be sent to any port.
Google's Browser Security Handbook says: "by default, cookie scope is limited to all URLs on the current host name - and not bound to port or protocol information." And some lines later: "There is no way to limit cookies to a single DNS name only [...] likewise, there is no way to limit them to a specific port." (Also keep in mind that IE does not factor port numbers into its same-origin policy at all.)
So it does not seem to be safe to rely on any well-defined behavior here.
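The port-sharing behavior is easy to observe without a browser. The following self-contained Python sketch (ports are chosen by the OS, the cookie value is illustrative) starts two HTTP services on the same host and shows that a cookie set by one is sent to the other:

```python
import http.cookiejar
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Record whatever Cookie header the client sent to this instance.
        self.server.seen_cookie = self.headers.get("Cookie")
        self.send_response(200)
        if self.server.sets_cookie:
            self.send_header("Set-Cookie", "session=abc123")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(sets_cookie):
    srv = http.server.HTTPServer(("127.0.0.1", 0), Handler)  # port 0: OS picks a free port
    srv.sets_cookie = sets_cookie
    srv.seen_cookie = None
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

a = serve(sets_cookie=True)   # this service sets a session cookie
b = serve(sets_cookie=False)  # this one never sets any cookie

jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
opener.open("http://127.0.0.1:%d/" % a.server_port).close()
opener.open("http://127.0.0.1:%d/" % b.server_port).close()

print(a.seen_cookie)  # first request: no cookie existed yet
print(b.seen_cookie)  # the cookie from the other port is sent here too
```

Python's http.cookiejar follows the same host-scoped (port-ignoring) matching as browsers, so the second service receives the first service's cookie.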

This is a really old question but I thought I would add a workaround I used.
I have two services running on my laptop (one on port 3000 and the other on 4000).
When I would jump between http://localhost:3000 and http://localhost:4000, Chrome would send the same cookie; each service would not understand it and would generate a new one.
I found that if I accessed http://localhost:3000 and http://127.0.0.1:4000, the problem went away since Chrome kept a cookie for localhost and one for 127.0.0.1.
Again, no one may care at this point, but it was easy and helpful in my situation.

This is a big gray area in cookie SOP (Same Origin Policy).
Theoretically, you can specify a port number in the domain, and the cookie will not be shared. In practice, this doesn't work in several browsers, and you will run into other issues. So it is only feasible if your sites are not for the general public and you can control which browsers are used.
The better approach is to get two domain names for the same IP and not rely on port numbers for cookies.

An alternative way around the problem is to make the name of the session cookie port related. For example:
mysession8080 for the server running on port 8080
mysession8000 for the server running on port 8000
Your code could access the webserver configuration to find out which port your server uses, and name the cookie accordingly.
Keep in mind that your application will receive both cookies, and you need to request the one that corresponds to your port.
There is no need to have the exact port number in the cookie name, but this is more convenient.
In general, the cookie name could encode any other parameter specific to the server instance you use, so it can be decoded by the right context.
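As a minimal Python sketch of this idea (the PORT environment variable and the Django setting in the comment are illustrative assumptions, not part of the original answer):

```python
import os

def session_cookie_name(port=None, base="mysession"):
    """Build a per-instance session cookie name from the listening port, so
    two instances on one host keep separate sessions.  Reading the port from
    a PORT environment variable is an assumed deployment convention."""
    if port is None:
        port = int(os.environ.get("PORT", "8000"))
    return "%s%d" % (base, port)

# e.g. in a Django settings.py (illustrative):
#   SESSION_COOKIE_NAME = session_cookie_name()
print(session_cookie_name(8080))  # mysession8080
```

Both instances still receive both cookies from the browser; the distinct names just let each instance pick out its own.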

In IE 8, cookies (verified only against localhost) are shared between ports. In FF 10, they are not.
I've posted this answer so that readers will have at least one concrete option for testing each scenario.

I was experiencing a similar problem running (and trying to debug) two different Django applications on the same machine.
I was running them with these commands:
./manage.py runserver 8000
./manage.py runserver 8001
When I logged in to the first one and then to the second, I always got logged out of the first, and vice versa.
I added this to my /etc/hosts:
127.0.0.1 app1
127.0.0.1 app2
Then I started the two apps with these commands:
./manage.py runserver app1:8000
./manage.py runserver app2:8001
Problem solved :)

It's optional.
The port may be specified so that cookies can be port specific. It isn't required; the web server / application must take care of this.
Source: German Wikipedia article, RFC 2109, chapter 4.3.1

Related

C# - How to detect if website was visited

I want to make a program which detects if some website was opened/visited by the user, for example, facebook.com. It has to work regardless of the used web browser.
I thought about checking records in the DNS cache. It would work, but there is a problem: it generates false positives. Why? Because some pages contain Facebook widgets, so I don't need to visit facebook.com for it to appear in my DNS cache; it appears every time I visit a website that embeds those widgets.
The second idea was looking for active TCP connections, but that doesn't work either.
The last idea was to sniff traffic. I made a simple test in Wireshark, and there is the same problem as with checking DNS cache records: false positives. Also, fb uses HTTPS, so I can't simply see their address; I have to obtain their IPs from DNS and then try to find them in the sniffed traffic.
I have no more ideas how to solve this problem.
Have you thought about banning or tracking the IP address for facebook?
I did an nslookup for facebook.com and got:
nslookup facebook.com
Non-authoritative answer:
Name: facebook.com
Addresses: 2a03:2880:f001:1f:face:b00c:0:25de
31.13.76.68
My suggestion would be to use the Titanium Web Proxy and use the OnRequest event to track calls to certain domains (found in the SessionEventArgs.ProxySession.Request.Url property in the OnRequest call). You can even modify the requests / results before they go out. However, be aware that this library does overwrite your current system proxy settings.

External (internet) Pub Sub

Lately I started to think about a solution to publish messages across the internet to subscribed clients I have. Our system is developed in C#.
We tried to use Redis. It works very well in terms of speed and accuracy, but very badly in terms of security: everyone can subscribe to everything, and the best I can do is:
1) Rename core-functions so they'll be unusable
2) Add authentication (but its per server, not per client)
I have 2 questions:
1) Can I do more in terms of Redis security? Can I set password per subscriber? per channel?
2) Are there any other solutions any of you is aware of?
Thanks!
Redis has almost no access control (just the generic AUTH command), and even the planned ACL feature does not include explicit support for subscribing / posting to specific channels.
However... there is a surprisingly simple thing you could do, provided you disable MONITOR and the other commands that can be used to listen to other clients' connections: use an HMAC to hash the logical channel name together with the password, obtaining a real channel name that is unguessable to clients that don't know the password.
This is the scheme (but you should carefully consider whether it is secure in your exact setup: connection encryption, the set of enabled commands, and so on. I can only guarantee that it is crypto-hard to obtain the channel name, and that no channel name can be guessed at random).
For example, suppose the password for channel "foo" is "bar". To obtain the real channel name you compute:
HMAC-SHA256("bar","foo") -> 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
And you have your channel name.
IMPORTANT: Note that N clients with the same wrong (old) password will still be able to communicate. This should not be a problem in most setups since they can communicate anyway in this case just subscribing to the same channel name.
IMPORTANT2: If this is over the internet, you should tunnel all this over SSL or a VPN.
IMPORTANT3: On top of all that, make sure to also use AUTH as an additional authentication layer.
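A minimal Python sketch of the derivation, using the standard hmac module (the redis-py calls in the comment are illustrative, not part of the original answer):

```python
import hashlib
import hmac

def channel_name(password, logical_name):
    """Derive the real Redis channel name from the logical name and a
    shared password.  HMAC-SHA256 is a keyed one-way function, so clients
    that don't know the password cannot guess the resulting name."""
    mac = hmac.new(password.encode("utf-8"),
                   logical_name.encode("utf-8"),
                   hashlib.sha256)
    return mac.hexdigest()

# Publisher and subscriber derive the same name independently, e.g. with
# redis-py (illustrative):
#   r.publish(channel_name("bar", "foo"), message)
#   pubsub.subscribe(channel_name("bar", "foo"))
```

Since the derivation is deterministic, every party holding the password lands on the same 64-hex-character channel name with no coordination.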

Auto login for two sites hosted on IIS at two diffrent ports

I have deployed two instances of the same website on IIS on two different ports, one for testing and the other for production. When I log in to the production site and then go to log in to the testing site (without logging off the production site), the testing site uses the same session and does not ask me for a username and password.
I have the same users in both sites.
Can anybody suggest how I can tackle this problem?
Use private/incognito browsing for testing, or use different domains.
You can also use a different web browser:
Internet Explorer for the testing site and Firefox for production.
Surprised no one has suggested specifying host headers. That way you can use port 80 for both instances, and they will not be confused, because the hostnames differ and session cookies will not cross over between instances.
Apart from some configuration in IIS and modifications to your DNS server (or just your local hosts file), no changes are needed to your code, and no switching browsers. I use this technique all the time; I have 10+ sites all running on one IIS server, using the same port 80 but different host headers.
Example
IIS Website - www.mysite.com
Binding configuration, IP address used is purely for example purposes.
IP Address: 192.168.1.100
Host Header: www.mysite.com
Port: 80
Local Host File or DNS
IP address used is purely for example purposes.
192.168.1.100 www.mysite.com # Example DNS / Local Host Entry
Links
Configure a Host Header for a Web Site (IIS 7)
Ish's Example
Taken from comments below
IIS Website - www.abc.com:85
Binding configuration
IP Address: 127.0.0.1
Host Header: www.abc.com
Port: 85
Local Host File or DNS
IP address used is purely for example purposes.
127.0.0.1 www.abc.com # Ish's Local Host Entry
An additional option to what @logikal suggests would be to add a session variable identifying which port is in use, and act based on its value. And since you are using cookies, add that value to the cookie name as well, so the browser can tell which one to use.
Each instance will then have a unique identifier based on the value of that variable.
It is easy to implement in a header and basically bulletproof, and you do not need to jump from one browser to another.

C# - How to detect whether a website has a Shared or Dedicated IP Address?

Is it possible to detect whether a website has a dedicated or shared IP address from its URL using C# (Windows Forms application)? I want to implement a feature in my application that lets the user write a web address in a TextBox and click a Test button, then shows a (Success) MessageBox if the site has a dedicated IP address or a (Failure) MessageBox otherwise.
How can I detect whether a website has a shared or dedicated IP address using C#.NET?
You can try, but you'll never have a good result. The best I think you could do is to check the PTR records of the IP, and then check if there are associated A records from different websites. This would still suck however, since a website could have two seemingly different domains that pertain to the same organization (googlemail.com/gmail.com for example).
Also, this assumes the existence of multiple PTR records. I don't think I've seen such a setup supported by most VPS/shared hosting.
Well, the way I would do it is:
Send HTTP GET to the URL and save the result.
Resolve the URL to an IP.
Send HTTP GET to the IP and save the result.
Compare the two results. (You can do sample checks between the two result)
If the results are the same, then this is dedicated hosting, if the result is different then this is a shared hosting.
Limitations for this method that I can think of now:
It will take you time to figure out a proper comparison method for the two results.
The shared host may be configured to route requests to the very site you are checking by default.
Functions to resolve URLs and make web requests in different programming languages are scattered across the Internet.
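The steps above can be sketched in Python. This is a rough heuristic, not a reliable test: the SequenceMatcher similarity measure and the 0.9 threshold are arbitrary assumptions, and the limitations listed above still apply.

```python
import socket
import urllib.request
from difflib import SequenceMatcher

def fetch(url):
    """GET a URL and return the body as text."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def similarity(a, b):
    """Rough page similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def looks_dedicated(domain):
    """Heuristic only: compare the page served for the domain with the
    page served when hitting the bare IP (the server's default site)."""
    by_name = fetch("http://%s/" % domain)
    ip = socket.gethostbyname(domain)
    by_ip = fetch("http://%s/" % ip)
    return similarity(by_name, by_ip) > 0.9  # 0.9 threshold is a guess
```

If the two responses match closely, the IP likely serves only that site; a shared host would typically return a different default page for the bare IP.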
From a technical standpoint, there's no such thing as a "shared" or "dedicated" IP address; the protocol makes no distinction. Those are terms used to describe how an IP is used.
As such, there's no programmatic method to answer "is this shared or dedicated?" Some of the other answers to this question suggest some ways to guess whether a particular domain is on a shared IP, but those methods are at best guesses.
If you really want to go down this road, you could crawl the web and store resolved IPs for every domain. (Simple, right?) Then you could query your massive database for all the domains hosted on a given IP. (There are tools that seem to do this already, although only the first one was able to identify the multiple domains I have hosted on my server.)
Of course, this is all for naught with VPS (or things like Amazon EC2) where the server hardware itself is shared, but every customer (domain) gets one or more dedicated IPs. From the outside, there's no way to know how such servers are set up.
TL;DR: This can't be done in a reliable manner.

HttpWebRequest to different IP than the domain resolves to

Long story short: the different environments (dev/staging/uat/live) of an API I'm calling are set up by putting a hosts-file record on the server, so that the live domain resolves to the other server for the HTTP request.
The problem is that they've done this with so many different environments that we don't have enough servers to use the server-wide host files for it anymore (We've got some environments running off the same servers - luckily not dev and live though :P).
I'm wondering if there's a way to make a WebRequest to a domain but explicitly specify the IP of the server it should connect to. Or is there any way of doing this short of going all the way down to socket connections? (I'd really prefer not to waste time and create bugs by trying to re-implement the HTTP protocol.)
PS: I've tried and we can't just get a new sub-domain for each environment.
One way to spoof an HTTP Host header is to set a proxy pointing at the actual server you'd like the request sent to. Something like
request.Proxy = new WebProxy(string.Format("http://{0}/", hostAddress));
may well work.
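Outside of .NET, the underlying idea (open the TCP connection to an explicit IP while still sending the original Host header) can be sketched in Python. Plain HTTP only; HTTPS would additionally need the right SNI and certificate handling. The addresses in the usage comment are hypothetical.

```python
import http.client

def get_via_ip(ip, host, path="/", port=80):
    """GET http://host/path, but connect the socket to `ip` instead of
    resolving `host` through DNS.  http.client will not override the
    Host header we pass explicitly."""
    conn = http.client.HTTPConnection(ip, port, timeout=10)
    try:
        conn.request("GET", path, headers={"Host": host})
        resp = conn.getresponse()
        return resp.status, resp.read()
    finally:
        conn.close()

# e.g. hit a staging box directly while keeping the live Host header:
#   status, body = get_via_ip("10.0.0.5", "api.example.com")
```

This gives per-request control over the target server without touching the machine-wide hosts file, which was the constraint in the question.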
There are other ways to control configuration values.
.NET supports conditional compilation, so you can create configuration sets and use directives to reference a specific domain instead of changing its resolution strategy. For example, in debug mode you can use x.com and in release mode y.com, wherever you reference the URL.
Web.config and app.config also support per-configuration transforms: you can have Web.Debug.config and Web.Release.config and specify different URL references in each.
