Finding more information about browser versions with C#/ASP.Net

First, some background to my problem.
There are many versions of Internet Explorer 6 and 7 that do not support more than 20 key-value pairs in a cookie. I have a list of the full versions that do and do not support this. The problem is fixed in a Windows update, but it's not possible for me to force the users of my app to run Windows Update in order to use my app.
We have developed a different cookie jar for versions of Internet Explorer that do not support this, but its performance is not optimal, so we need to use it only on the versions of IE that actually require it.
The full version number of an IE browser has the format 6.00.2900.2180. Everything I have found suggests using Request.Browser to get browser information, but this is far too limited for my needs. To clarify: MajorVersion returns 6 and MinorVersion returns 0, giving me 6.0 (6.0 is the version of pretty much every copy of Internet Explorer 6 in existence). What I need is the third and fourth parts (or at the very least, the third part) of the full version.
So, does anyone know of a way, in ASP.Net with C#, to find out the information I need? If someone has looked extensively into this and found it to be impossible, that is fine as an answer.

You may need to revisit why you're storing so many different key-value pairs. Going low-tech, couldn't you concatenate the values into fewer keys, or maybe even a single key? What sort of values are you storing in a cookie?

Try grabbing the "User-Agent" request header using Request.Headers.
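For illustration, a minimal sketch of reading that header in an ASP.NET handler (though, as the accepted answer below notes, IE only reports its major.minor version here, not the full build number):

```csharp
using System.Web;

public class UserAgentHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // The raw User-Agent header, e.g.
        // "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)".
        // IE does not include the full build number
        // (e.g. 6.00.2900.2180) in this string.
        string userAgent = context.Request.Headers["User-Agent"];
        context.Response.Write(userAgent);
    }

    public bool IsReusable { get { return true; } }
}
```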

Copying this from meandmycode to accept it as the answer:

IE doesn't specify the long version number in the user-agent header, so you have absolutely no chance of detecting this other than sending a 'snoop' page with JavaScript to detect the complex version number. But doing something like that is dodge city, and JavaScript may not be able to find the full version either.

Related

C# Possible to search Google by image and downloading the first one?

Is there a way to programmatically upload an image file to search Google, and then download the first result (the one with the best resolution)?
EDIT: The Google Search API would not work for me, as I would have far more than 100 requests per day, and I am not willing to pay, since I am not a company.
Yes, there is. The Google Custom Search API allows you to submit queries (including images) and retrieve results programmatically. There are even client libraries available for multiple languages.
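For illustration, a minimal sketch of calling the Custom Search JSON API from C# and downloading the top image result. The API key, engine ID (cx), and query below are placeholders, and note this searches images by keyword rather than by uploading an image:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class GoogleImageSearch
{
    static async Task Main()
    {
        // Placeholders: supply your own API key and search engine ID (cx).
        string apiKey = "YOUR_API_KEY";
        string cx = "YOUR_SEARCH_ENGINE_ID";
        string query = Uri.EscapeDataString("kittens");

        string url = "https://www.googleapis.com/customsearch/v1" +
                     $"?key={apiKey}&cx={cx}&q={query}&searchType=image";

        using var client = new HttpClient();
        string json = await client.GetStringAsync(url);

        // The first item's "link" field is the URL of the top image result.
        using var doc = JsonDocument.Parse(json);
        string firstImageUrl = doc.RootElement
            .GetProperty("items")[0]
            .GetProperty("link")
            .GetString();

        // Download the image itself.
        byte[] imageBytes = await client.GetByteArrayAsync(firstImageUrl);
        await File.WriteAllBytesAsync("first-result.jpg", imageBytes);
        Console.WriteLine("Saved " + firstImageUrl);
    }
}
```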
EDIT: After the OP changed his question, basically saying that he doesn't want to use the Google API, I can only refer to this (a bit outdated) question and quote the Google Terms of Service:
1.4 Appropriate Conduct. You shall not, and shall not allow any third party to: ... (i) directly or indirectly generate queries, or impressions of or clicks on Results, through any automated, deceptive, fraudulent or other invalid means (including, but not limited to, click spam, robots, macro programs, and Internet agents);
So to recap, it is possible, but it is only legal via the API I linked above.

How to capture visited URLs and their html by any browsers

I want to find a decent solution to track the URLs and HTML content that users are visiting, and to provide more information to the user. The solution should have minimal impact on end users.
I don't want to write plugins for different browsers; they are hard to maintain.
A proxy is not acceptable either, since I don't want to change any of the user's proxy settings.
My application is written in C# and targets Windows. It would be best if the solution could support other operating systems as well.
Based on my research, I found the following methods that look workable, but all of them have drawbacks and I can't determine which one is best.
Use WinPcap
WinPcap sniffs all TCP packets without changing any user settings, and only requires installing the WinPcap package, which is acceptable to me. But I have two questions:
a. How do I convert TCP packets into URLs and HTML?
b. Does it really impact performance? I don't know whether sniffing all TCP traffic is too much overhead for this requirement.
Find history files for different browsers
This way looks like the easiest one, but I wonder how stable it is. I am not sure whether the browser writes the history reliably, or when it writes it. My application needs to pop up information before the user leaves the current page, so this solution won't work for me if the browser only writes the history file when the user closes it.
Use FindWindow, accessibility objects, or a COM interface to find the UI element that contains the URL
I find this approach incomplete; for example, Chrome only exposes the active tab's URL, not all of them.
Another drawback is that I would have to request the URL a second time to get its HTML content.
Any comment or suggestion is welcome.
BTW, I am not writing any spyware. The application tries to find all RSS feeds on a web page and show them to end users. I could easily do that in a browser plugin, but I really want to support multiple browsers with a single UI. Thanks.
Though this is a very old post, I thought I would give some input.
Approach 1, WinPcap, is the best one. It will work for any browser, even the built-in browser of any other installed application, and it is the least resource-consuming approach too.
There is a library, Pcap.Net, that has an HTTP parser. You can reconstruct the HTTP stream and use its HttpResponseDatagram to parse the body, which your application can then consume.
This link gave me more insight:
Tcp Session Reconstruction with Winpcap
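As a rough sketch of that approach, assuming the Pcap.Net API: this handles only unencrypted HTTP requests that fit in a single packet, so full pages still need the TCP session reconstruction described in the linked article.

```csharp
using System;
using PcapDotNet.Core;
using PcapDotNet.Packets;
using PcapDotNet.Packets.Http;
using PcapDotNet.Packets.Transport;

class HttpSniffer
{
    static void Main()
    {
        // Pick the first capture device (adjust for your machine).
        LivePacketDevice device = LivePacketDevice.AllLocalMachine[0];

        using (PacketCommunicator communicator =
            device.Open(65536, PacketDeviceOpenAttributes.Promiscuous, 1000))
        {
            // Only capture traffic on port 80 (plain HTTP; HTTPS is encrypted).
            communicator.SetFilter("tcp port 80");

            communicator.ReceivePackets(0, packet =>
            {
                TcpDatagram tcp = packet.Ethernet.IpV4.Tcp;
                HttpDatagram http = tcp.Http;
                if (http != null && http.IsRequest)
                {
                    // Method and URI of the request, e.g. "GET /feed.xml".
                    var request = (HttpRequestDatagram)http;
                    Console.WriteLine("{0} {1}", request.Method, request.Uri);
                }
            });
        }
    }
}
```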

Add A "Check for Updates" Option

I would like to know how to check for updates for my own software. When a user runs the program and clicks "check for updates", what should I do, and how do I download the new version? Do I parse the website's download page, or use a database?
I would like to know how to achieve this in both C++ and C#.
There are quite literally hundreds of different ways of doing this, so you need to put a little more thought into what your application platform/architecture is capable of (i.e. do you have network access, what is the technical level of your typical user, etc.). As an example of a solution:
At startup your app checks a web service for the latest version number and compares it to the currently installed version. If the versions differ, the user gets the option to download and install the newer one. This could be done either by sending the user to a specific web address, or by downloading and executing the installer for them, as sketched below.
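A minimal sketch of that first approach; the URL is hypothetical and assumes a web service that returns the latest version string as plain text:

```csharp
using System;
using System.Net;
using System.Reflection;

static class UpdateChecker
{
    // Hypothetical endpoint returning the latest version as plain text, e.g. "1.2.3.0".
    const string VersionUrl = "https://example.com/myapp/latest-version.txt";

    public static bool IsUpdateAvailable()
    {
        using (var client = new WebClient())
        {
            // Compare the advertised version against the running assembly's version.
            Version latest = new Version(client.DownloadString(VersionUrl).Trim());
            Version current = Assembly.GetExecutingAssembly().GetName().Version;
            return latest > current;
        }
    }
}
```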
You also have the option of using something called ClickOnce, which will effectively handle all of this for you. It is a big subject, so Google is your friend on this one, but as a starter have a look at:
http://msdn.microsoft.com/en-us/library/t71a733d%28VS.80%29.aspx

Windows App spellcheck

I was wondering if there is another way to spell check a Windows app instead of what I've been using: "Microsoft.Office.Interop.Word". I can't buy a spell checking add-on, and I also cannot use open source. I would like the spell check to be dynamic. Any suggestions?
EDIT:
I have seen several similar questions, the problem is they all suggest using open source applications (which I would love) or Microsoft Word.
I am currently using Word to spell check, and it slows my application down and causes several glitches in it. Word is not a clean solution, so I really want to find some other way. Is my only other option to recreate my app as a WPF app so I can take advantage of the SpellCheck class?
If I were you I would download the data from the English Wiktionary and parse it to obtain a list of all English words (for instance). Then you could rather easily write at least a primitive spell-checker yourself. In fact, I use a parsed version of the English Wiktionary in my own mathematical application AlgoSim. If you'd like, I could send you the data file.
Update
I have now published a parsed word list at english.zip (942 kB, 383,735 entries). The data originates from the English Wiktionary and, as such, is licensed under the Creative Commons Attribution/Share-Alike License.
To obtain a list like this, you can either download all the articles on Wiktionary as one huge XML file containing every Wiki- and HTML-formatted article, which is then more or less trivial to parse, or you can run a bot on the site. I got help obtaining a parsed file from a user at Wiktionary (I seem to have forgotten his name, though...), and this file (english.txt in english.zip) is a further-processed version of the file I got.
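A primitive checker over such a word list can then be little more than a hash-set lookup. A sketch, assuming a plain-text file with one word per line (like the english.txt mentioned above):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

class PrimitiveSpellChecker
{
    readonly HashSet<string> _words;

    public PrimitiveSpellChecker(string wordListPath)
    {
        // Load the word list once; lookups are case-insensitive.
        _words = new HashSet<string>(File.ReadLines(wordListPath),
                                     StringComparer.OrdinalIgnoreCase);
    }

    public bool IsCorrect(string word)
    {
        return _words.Contains(word.Trim());
    }

    // Usage: var checker = new PrimitiveSpellChecker("english.txt");
    //        bool ok = checker.IsCorrect("hello");
}
```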
http://msdn.microsoft.com/en-us/library/system.windows.controls.spellcheck.aspx
I use Aspell-win32; it's old, but it's open source and works as well as or better than the Word spell check. I came here looking for a built-in solution.

C# - how to download only the modified part of an HTML

I'm using C# + HttpWebRequest.
I have an HTML page I need to frequently check for updates.
Assuming I already have an older version of the HTML page (in a string for example), is there any way to download ONLY the "delta", or modified portion of the page, without downloading the entire page itself and comparing it to the older version?
Only if that functionality is included in the web server, and that's pretty unlikely. So no, sorry.
Not for any given page, no.
But if you wrote a facility to give you the differences based on a timestamp or some kind of ID, then yes. This isn't anything standard. You'd have to create a feed for the page using syndication, or create a web service to satisfy the need. Of course, you have to be in control of the web server you want to monitor, which may not be the case for you.
The short answer is, no. The long answer is that if the HTML is in version control and you write some server side code that, given a particular version number, gives you the diff between the current version and the specified version, yes. If the HTML isn't in version control and you just want to compare your version to the current version, then either you need to download the current version to do the comparison on the client or upload your version to the server and have it do the comparison -- and send the difference back. Obviously, it's more efficient just to have your client re-download the new version.
Set the IfModifiedSince property of HttpWebRequest.
This won't give you a 'delta', but the server will reply with 304 (Not Modified) if the page has not changed at all.
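A sketch of such a conditional GET; note that HttpWebRequest surfaces the 304 as a WebException:

```csharp
using System;
using System.IO;
using System.Net;

class ConditionalGet
{
    // Returns the new page content, or null if it hasn't changed since lastFetchedUtc.
    static string TryDownload(string url, DateTime lastFetchedUtc)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.IfModifiedSince = lastFetchedUtc;
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd(); // page changed: full new content
            }
        }
        catch (WebException ex)
        {
            var response = ex.Response as HttpWebResponse;
            if (response != null && response.StatusCode == HttpStatusCode.NotModified)
                return null; // 304: unchanged, keep the cached copy
            throw;
        }
    }
}
```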
You have the old version and the server has the new version. How could you download just the delta without knowing what has changed? How could the server deliver the delta without knowing which old version you have?
Obviously, there is no way around downloading the entire page. Or uploading the old version to the server (assuming the server has a service that allows that), but that would only increase the traffic.
Like the other answers before me: there is no way to get around the download.
You can, however, avoid re-parsing the HTML when it hasn't changed, by creating a hash of each page revision and comparing the stored hash with the new one. When they differ, you would use a diff algorithm to extract only the 'delta' information. I think most modern crawlers do something along these lines.
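For instance, a hash comparison along those lines might look like this (a sketch; any stable hash would do):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class PageHash
{
    // Hash the raw HTML so revisions can be compared cheaply.
    static string Hash(string html)
    {
        using (var sha = SHA256.Create())
            return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(html)));
    }

    // Only run the (expensive) parse/diff when the content actually changed.
    public static bool HasChanged(string cachedHtml, string freshHtml)
    {
        return Hash(cachedHtml) != Hash(freshHtml);
    }
}
```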
If the older versions were kept on the web server, and you sent a 'version number' or a modified date for the version you have when you requested the delta, the server could theoretically diff the page and send only the difference. But both copies have to be on one machine for anybody to know what the difference is.
You could use the AddRange method of the HttpWebRequest class.
With this you can specify a byte range of the resource you want to download; this is also how interrupted HTTP downloads are resumed.
It's not a true delta, but you can decrease traffic if you only load the parts that change.
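A sketch of such a range request; this only helps if you know which byte ranges you care about, and the server must support range requests:

```csharp
using System;
using System.IO;
using System.Net;

class RangeRequest
{
    // Download only bytes from..to of the resource (inclusive).
    static byte[] DownloadRange(string url, int from, int to)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.AddRange(from, to); // sends "Range: bytes=from-to"
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var stream = response.GetResponseStream())
        using (var buffer = new MemoryStream())
        {
            stream.CopyTo(buffer);
            return buffer.ToArray(); // body of the 206 Partial Content response
        }
    }
}
```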
