C# - How to detect if a website was visited

I want to make a program that detects whether some website, for example facebook.com, was opened/visited by the user. It has to work regardless of which web browser is used.
I thought about checking the records in the DNS cache. That would work, but there is a problem: it generates false positives. Why? Because some pages contain Facebook widgets. I don't need to visit Facebook for facebook.com to appear in my DNS cache; it appears every time I visit any website that embeds a Facebook widget.
The second idea was looking for active TCP connections, but that doesn't work either.
The last idea was to sniff traffic. I ran a simple test in Wireshark and hit the same false-positive problem as with the DNS cache. Also, Facebook uses HTTPS, so I can't simply read their address out of the traffic; I would have to obtain their IPs from DNS and then look for those IPs in the sniffed packets.
I have no more ideas for how to solve this problem.

Have you thought about banning or tracking the IP address for facebook?
I did an nslookup for facebook.com and got:
nslookup facebook.com
Non-authoritative answer:
Name: facebook.com
Addresses: 2a03:2880:f001:1f:face:b00c:0:25de
31.13.76.68
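If you go down this road, a minimal sketch could resolve facebook.com and look for those addresses among the machine's active TCP connections; note that the transience and false-positive caveats from the question still apply, and polling only catches connections open at that moment:

using System;
using System.Linq;
using System.Net;
using System.Net.NetworkInformation;

class FacebookConnectionCheck
{
    static void Main()
    {
        // Resolve the current facebook.com addresses (IPv4 and IPv6).
        var fbAddresses = Dns.GetHostAddresses("facebook.com");

        // Enumerate active TCP connections and compare remote endpoints.
        var connections = IPGlobalProperties.GetIPGlobalProperties()
                                            .GetActiveTcpConnections();
        bool seen = connections.Any(c => fbAddresses.Contains(c.RemoteEndPoint.Address));

        Console.WriteLine(seen
            ? "Active connection to facebook.com detected"
            : "No active connection to facebook.com");
    }
}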

My suggestion would be to use the Titanium Web Proxy and hook its OnRequest event to track calls to certain domains (the URL is stored in the SessionEventArgs.ProxySession.Request.Url property available in the OnRequest call). You can even modify the requests/results before they go out. However, be aware that this library does overwrite your current system proxy settings.
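A minimal sketch of that idea follows. Titanium Web Proxy's event and property names have changed between versions, so the members used here (BeforeRequest, e.HttpClient.Request.Url) are assumptions that may need adjusting to the version you install:

using System;
using System.Net;
using System.Threading.Tasks;
using Titanium.Web.Proxy;
using Titanium.Web.Proxy.EventArguments;
using Titanium.Web.Proxy.Models;

class ProxyWatcher
{
    static void Main()
    {
        var proxy = new ProxyServer();
        proxy.BeforeRequest += OnRequest;   // the OnRequest-style hook

        var endPoint = new ExplicitProxyEndPoint(IPAddress.Any, 8000, true);
        proxy.AddEndPoint(endPoint);
        proxy.Start();
        proxy.SetAsSystemHttpProxy(endPoint);   // overwrites system proxy settings

        Console.ReadLine();
        proxy.Stop();
    }

    static Task OnRequest(object sender, SessionEventArgs e)
    {
        // Older versions expose this as e.ProxySession.Request.Url.
        string url = e.HttpClient.Request.Url;
        if (url.Contains("facebook.com"))
            Console.WriteLine("Visited: " + url);
        return Task.CompletedTask;
    }
}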

Is there any port specific cookie in asp .net [duplicate]

I have two HTTP services running on one machine. I just want to know if they share their cookies or whether the browser distinguishes between the two server sockets.
The current cookie specification is RFC 6265, which replaces RFC 2109 and RFC 2965 (both RFCs are now marked as "Historic") and formalizes the syntax for real-world usages of cookies. It clearly states:
Introduction
...
For historical reasons, cookies contain a number of security and privacy infelicities. For example, a server can indicate that a given cookie is intended for "secure" connections, but the Secure attribute does not provide integrity in the presence of an active network attacker. Similarly, cookies for a given host are shared across all the ports on that host, even though the usual "same-origin policy" used by web browsers isolates content retrieved via different ports.
And also:
8.5. Weak Confidentiality
Cookies do not provide isolation by port. If a cookie is readable by a service running on one port, the cookie is also readable by a service running on another port of the same server. If a cookie is writable by a service on one port, the cookie is also writable by a service running on another port of the same server. For this reason, servers SHOULD NOT both run mutually distrusting services on different ports of the same host and use cookies to store security sensitive information.
According to RFC 2965, section 3.3.1 (which might or might not be followed by browsers), unless the port is explicitly restricted via the Port attribute of the Set-Cookie2 header, a cookie may be sent to any port.
Google's Browser Security Handbook says: "by default, cookie scope is limited to all URLs on the current host name - and not bound to port or protocol information." And some lines later: "There is no way to limit cookies to a single DNS name only [...] likewise, there is no way to limit them to a specific port." (Also keep in mind that IE does not factor port numbers into its same-origin policy at all.)
So it does not seem to be safe to rely on any well-defined behavior here.
This is a really old question but I thought I would add a workaround I used.
I have two services running on my laptop (one on port 3000 and the other on 4000).
When I would jump between http://localhost:3000 and http://localhost:4000, Chrome would send the same cookie; each service would not understand the cookie and would generate a new one.
I found that if I accessed http://localhost:3000 and http://127.0.0.1:4000, the problem went away, since Chrome kept one cookie for localhost and one for 127.0.0.1.
Again, no one may care at this point, but it was easy and helpful in my situation.
This is a big gray area in cookie SOP (Same Origin Policy).
Theoretically, you can specify a port number in the domain and the cookie will not be shared. In practice, this doesn't work with several browsers and you will run into other issues. So it is only feasible if your sites are not for the general public and you can control which browsers are used.
The better approach is to get two domain names for the same IP and not rely on port numbers for cookies.
An alternative way to get around the problem is to make the name of the session cookie port-related. For example:
mysession8080 for the server running on port 8080
mysession8000 for the server running on port 8000
Your code could access the webserver configuration to find out which port your server uses, and name the cookie accordingly.
Keep in mind that your application will receive both cookies, and you need to request the one that corresponds to your port.
There is no need to have the exact port number in the cookie name, but this is more convenient.
In general, the cookie name could encode any other parameter specific to the server instance you use, so it can be decoded by the right context.
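A minimal ASP.NET sketch of this naming scheme (the cookie prefix and helper name are illustrative):

using System;
using System.Web;

public static class PortScopedCookie
{
    public static HttpCookie GetOrCreate(HttpContext context)
    {
        // Derive the cookie name from the port this instance serves,
        // e.g. "mysession8080" for the server running on port 8080.
        string cookieName = "mysession" + context.Request.Url.Port;

        // Both instances' cookies arrive with the request; pick ours.
        var cookie = context.Request.Cookies[cookieName];
        if (cookie == null)
        {
            cookie = new HttpCookie(cookieName, Guid.NewGuid().ToString());
            context.Response.Cookies.Add(cookie);
        }
        return cookie;
    }
}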
In IE 8, cookies (verified only against localhost) are shared between ports. In FF 10, they are not.
I've posted this answer so that readers will have at least one concrete option for testing each scenario.
I was experiencing a similar problem running (and trying to debug) two different Django applications on the same machine.
I was running them with these commands:
./manage.py runserver 8000
./manage.py runserver 8001
When I logged in to the first one and then to the second one, I always got logged out of the first one, and vice versa.
I added this to my /etc/hosts:
127.0.0.1 app1
127.0.0.1 app2
Then I started the two apps with these commands:
./manage.py runserver app1:8000
./manage.py runserver app2:8001
Problem solved :)
It's optional.
The port may be specified so that cookies can be port-specific. It's not necessary; the web server / application must take care of this.
Source: German Wikipedia article, RFC 2109, chapter 4.3.1

How to get end user Machine name in IIS logs

IIS 7.0 and above. No load balancer is involved in this setup. The file being requested is a small spacer image that can be requested synchronously or loaded asynchronously using jQuery. The file itself is not important; it is just a way to get the end user to hit this IIS server for analytics.
I have a requirement to capture the machine names of visitors from IIS logs. The current log already contains the client IP address. The problem is that IPs are short-lived in our environment, and if I don't resolve an IP to a machine name soon enough, it is no longer useful. So we need the machine name for a visiting IP determined pretty much in real time.
What is a good approach to this? These are the options I found:
1) Enable reverse DNS lookup in IIS -> http://www.expta.com/2010/01/how-to-enable-reverse-dns-lookup-in-iis.html. This affects server performance, and I am worried it will end up holding the user request and make the page load slowly due to the increased expense of the reverse lookup operation.
2) Write an IIS logging module that enhances logging by doing a reverse lookup of IPs and writing machine names to the log. I'm afraid this will slow the request turnaround time for the end user and affect server performance due to the reverse DNS lookup. This is pretty much me doing point 1 myself instead of relying on Microsoft's built-in capability; in the end, the real-time reverse DNS lookups will affect performance either way.
3) Same as point 1 or 2 above, but change the HTML of the pages users are hitting so that the IIS-hosted image file is loaded with an async JavaScript call (as opposed to an inline call). That way the end user doesn't have to wait for this IIS request to complete, and the rest of the page (the content that matters to them) can load without depending on the spacer image request. But the browser will still dedicate one connection to the async image load, so it is still a performance hit for the end user.
4) Just use default IIS logging in real time. Have a separate C# app read the log file every 5 minutes or so, detect the newly added lines, parse them for the IP, do a reverse lookup to find the machine name, and log the result to a database or flat file. The flip side is that I pretty much have to process entries immediately, because if I don't, the IP might have been reassigned to a different machine by the time my application reads the log and does the reverse lookup. I also have to deal with the complexity of reading only the log entries added since the previous read (see the sketch after this list).
5) http://www.iis.net/learn/extensions/advanced-logging-module/advanced-logging-for-iis-real-time-logging -> I guess this is the same as point 2 above, except it is written in VC++ instead of C#, so it has the same disadvantages.
So every method out there seems to have downsides. What do you think is a good way to solve this problem?
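For illustration, a rough sketch of option 4, assuming a single W3C-format log file and assuming c-ip is the last configured field (both are assumptions; adjust to your actual field order):

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

class IisLogTailer
{
    static int _linesRead;

    // Call periodically, e.g. from a timer.
    static void Poll(string logPath)
    {
        using (var fs = new FileStream(logPath, FileMode.Open,
                                       FileAccess.Read, FileShare.ReadWrite))
        using (var reader = new StreamReader(fs))
        {
            string line;
            int lineNo = 0;
            while ((line = reader.ReadLine()) != null)
            {
                lineNo++;
                if (lineNo <= _linesRead) continue;      // already processed
                if (line.StartsWith("#")) continue;      // W3C header lines

                var fields = line.Split(' ');
                string clientIp = fields[fields.Length - 1]; // assumption: c-ip is last

                try
                {
                    string machine = Dns.GetHostEntry(clientIp).HostName;
                    Console.WriteLine(clientIp + " => " + machine);
                }
                catch (SocketException)
                {
                    Console.WriteLine(clientIp + " => (no PTR record)");
                }
            }
            _linesRead = lineNo;
        }
    }
}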
Reverse-mapping an IP to a machine name is not reliably possible because of the way routing works: many machines can appear to come from the same IP.
If you have found a way to map an IP to a machine name that is acceptable to you, one approach could be to simply have the site serving the image do all the necessary discovery in a normal request handler. That way you can also gather more information about the user (cookies, authentication headers, ...). This approach may be more flexible than configuring IIS logging.
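A sketch of that idea as an ASP.NET IHttpHandler that serves the spacer image and does the reverse lookup in the same request (the handler and file names are illustrative; in production you would move the lookup off the request thread, since it is exactly the latency concern raised above):

using System;
using System.Net;
using System.Net.Sockets;
using System.Web;

public class SpacerHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string ip = context.Request.UserHostAddress;
        string machineName;
        try
        {
            machineName = Dns.GetHostEntry(ip).HostName;  // reverse DNS lookup
        }
        catch (SocketException)
        {
            machineName = "(unresolved)";
        }
        System.Diagnostics.Trace.WriteLine(ip + " => " + machineName);

        // Serve the spacer image itself.
        context.Response.ContentType = "image/gif";
        context.Response.WriteFile(context.Server.MapPath("~/spacer.gif"));
    }

    public bool IsReusable
    {
        get { return true; }
    }
}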

C# - How to detect whether a website has a Shared or Dedicated IP Address?

Is it possible to detect whether a website has a dedicated or shared IP address from its URL using C# (Windows Forms application)? I want to implement a feature in my application where the user writes a web address in a TextBox and then clicks a Test button; it should show a Success MessageBox if the site has a dedicated IP address, or a Failure MessageBox otherwise.
How can I detect whether a website has a shared or dedicated IP address using C#.NET?
You can try, but you'll never get a good result. The best you could do, I think, is to check the PTR records of the IP and then check whether there are associated A records from different websites. This would still suck, however, since a website could have two seemingly different domains that belong to the same organization (googlemail.com/gmail.com, for example).
Also, this assumes the existence of multiple PTR records, and I don't think I've seen such a setup supported by most VPS/shared hosting.
Well, the way I would do it is:
Send HTTP GET to the URL and save the result.
Resolve the URL to an IP.
Send HTTP GET to the IP and save the result.
Compare the two results. (You can do sample checks between the two results.)
If the results are the same, then this is dedicated hosting; if the results are different, then this is shared hosting.
Limitations of this method that I can think of right now:
It will take you time to figure out a proper method for comparing the two results.
The shared hosting might be configured to route requests for the bare IP to the very site you are checking by default.
Functions to resolve URLs and make web requests for different programming languages are scattered across the Internet.
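A rough sketch of the comparison method described above (example.com is an illustrative target, and the final equality check is deliberately crude; a real check would sample and normalize the pages):

using System;
using System.Net;
using System.Net.Http;
using System.Net.Sockets;
using System.Threading.Tasks;

class SharedIpCheck
{
    static async Task Main()
    {
        const string host = "example.com";   // illustrative target
        using (var client = new HttpClient())
        {
            // 1. GET by host name and save the result.
            string byName = await client.GetStringAsync("http://" + host + "/");

            // 2. Resolve the host name to an IP (first IPv4 address).
            var addresses = await Dns.GetHostAddressesAsync(host);
            IPAddress ip = Array.Find(addresses,
                a => a.AddressFamily == AddressFamily.InterNetwork);

            // 3. GET by bare IP; a name-based virtual host will usually
            //    answer with its default site instead.
            string byIp = await client.GetStringAsync("http://" + ip + "/");

            // 4. Crude comparison between the two results.
            Console.WriteLine(byName == byIp ? "Likely dedicated IP"
                                             : "Likely shared IP");
        }
    }
}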
From a technical standpoint, there's no such thing as a "shared" or "dedicated" IP address; the protocol makes no distinction. Those are terms used to describe how an IP is used.
As such, there's no programmatic method to answer "is this shared or dedicated?" Some of the other answers to this question suggest some ways to guess whether a particular domain is on a shared IP, but those methods are at best guesses.
If you really want to go down this road, you could crawl the web and store resolved IPs for every domain. (Simple, right?) Then you could query your massive database for all the domains hosted on a given IP. (There are tools that seem to do this already, although only the first one was able to identify the multiple domains I have hosted on my server.)
Of course, this is all for naught with VPS (or things like Amazon EC2) where the server hardware itself is shared, but every customer (domain) gets one or more dedicated IPs. From the outside, there's no way to know how such servers are set up.
TL;DR: This can't be done in a reliable manner.

Detect new computer on website?

How do I detect when a new computer has logged into my website?
The public IP address can be the same, since more than one computer can share a single internet connection.
I could use cookies, but that would only detect a new browser, not a new computer! One computer can have IE, Firefox, Chrome, etc.
I expect (and hope) that this is impossible. If my browser is transmitting information that identifies my machine, then I want a new browser. Likewise, you should probably not be expecting to be able to receive such information.
Update
Seems like I have to update my expectations: https://panopticlick.eff.org/
You can use browser fingerprinting to do a pretty darn good job of distinguishing between computers that visit your site. It won't be 100% perfect, but not far short.
There is no unique way to identify visitors to your website. All types of cookies get deleted at some point. You might be tempted to use Flash cookies, since they don't depend on the browser, but I strongly recommend against it since there is a huge legal debate around them.
Your only solution is to use a heuristic based on all the information you can gather on your visitor. This is called browser fingerprinting. Check out http://panopticlick.eff.org/ for the latest research on this topic.
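Server-side you only see a few of the signals that panopticlick collects, but a crude sketch of header-based fingerprinting could look like this (real fingerprinting adds many more signals gathered via JavaScript):

using System;
using System.Security.Cryptography;
using System.Text;
using System.Web;

public static class BrowserFingerprint
{
    // Hash a handful of request headers into a rough identifier.
    public static string Compute(HttpRequest request)
    {
        string raw = (request.UserAgent ?? "")
                   + "|" + (request.Headers["Accept-Language"] ?? "")
                   + "|" + (request.Headers["Accept-Encoding"] ?? "");

        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(raw));
            return Convert.ToBase64String(hash);
        }
    }
}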
You can check the HTTP_X_FORWARDED_FOR header, which should contain the machine's private LAN address (e.g. 192.168.0.10), provided the request was forwarded by a proxy.
You can set a cookie on the client and check it in Session_Start. It's not a 100% solution, but it can work.
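A minimal Global.asax sketch of that suggestion (the cookie name is illustrative, and as noted above this identifies a browser profile, not a computer):

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Session_Start(object sender, EventArgs e)
    {
        // Tag the client on first visit; subsequent sessions reuse the cookie.
        if (Request.Cookies["client-id"] == null)
        {
            var cookie = new HttpCookie("client-id", Guid.NewGuid().ToString());
            cookie.Expires = DateTime.UtcNow.AddYears(1);
            Response.Cookies.Add(cookie);
        }
    }
}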

How to detect the current sharepoint pages from the client machine?

On the client machine I need to be able to somehow detect which sites the current user is looking at right now.
I know the base URL of the SharePoint app, say sharepoint.thecompany.net, but how the heck do I get the last requested URL from the server?
I hit a dead stop when trying to iterate the current processes and cast the iexplore process to something I can work with; I just don't know what kind of object to cast the process to :-(
I hope to implement this logic in a C# assembly that will run on the client box.
Any other approach that might work?
Thanks in advance
WatiN will allow you to attach to IE instances and get the current url from them. It will also allow you to do the same with Firefox instances.
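If you'd rather not pull in WatiN just to read URLs, a sketch using the SHDocVw COM interop (add a COM reference to "Microsoft Internet Controls") can enumerate open IE windows directly; the SharePoint host name below is the one from the question:

using System;
using SHDocVw;   // COM reference: "Microsoft Internet Controls"

class OpenPageScanner
{
    static void Main()
    {
        var shellWindows = new ShellWindows();
        foreach (InternetExplorer ie in shellWindows)
        {
            string url = ie.LocationURL;

            // ShellWindows also returns Windows Explorer (file system)
            // windows, whose LocationURL is empty or a file:// path.
            if (!string.IsNullOrEmpty(url) &&
                url.Contains("sharepoint.thecompany.net"))
            {
                Console.WriteLine("SharePoint page open: " + url);
            }
        }
    }
}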
It might be more efficient, however, to try to capture the requested URLs at the network level, Wireshark-style: just listen to the HTTP traffic on the computer and keep track of the URLs. A solution like that is a bit over my head, though.
EDIT: I came across this while looking for a solution: http://www.codeproject.com/KB/IP/networkmonitor.aspx
From what I can see, I think you could adapt the monitoring code to watch for HTTP request packets and parse the headers for the URL information you need.
