I've been thinking about this and can't seem to find a way to do this:
I've got some code running on WatiN 2.0 which connects to a site over an SSL tunnel and, after performing certain tasks (which there is no feasible way to automate without relying on a browser), should be able to download an image over that same SSL connection. The image is served dynamically depending on some state generated during navigation, and it is only served through the SSL connection associated with that state, so I really need to stick with WatiN + IE.
Thanks in advance
If I understand you correctly, you are trying to go to a web page (via multiple steps) and then save a copy of an image (dynamically generated) on that page right?
If so, I don't think there's a way to do this built into WatiN, but I stumbled across a thread on the WatiN mailing list archive which may help.
Basically, it looks like you can use WatiN to inject some JavaScript into your page that copies the image to the clipboard, and then grab the image from the clipboard in your test code.
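To make that concrete, here is a minimal sketch of the idea (not the exact code from that thread). It assumes the image has an id you can target ("secureImage" is made up, as are the URL and the save path) and relies on IE's createControlRange/execCommand('Copy') to put the image on the clipboard, where .NET can read it:

// Hedged sketch only: element id, URL and save path are placeholders.
using System.Windows.Forms;   // Clipboard
using WatiN.Core;

class ImageGrabber
{
    [STAThread]   // Clipboard access needs an STA thread
    static void Main()
    {
        using (var browser = new IE("https://example.com/secure/start"))
        {
            // ... drive the site with WatiN until the image is on the page ...

            // Ask IE to copy the <img> element to the clipboard.
            browser.RunScript(
                @"var img = document.getElementById('secureImage');
                  var range = document.body.createControlRange();
                  range.addElement(img);
                  range.execCommand('Copy');");

            if (Clipboard.ContainsImage())
            {
                Clipboard.GetImage().Save(@"C:\temp\secureImage.png");
            }
        }
    }
}

Because the copy happens inside the same IE instance WatiN is driving, the image comes from the authenticated SSL session rather than a second request.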
Hope that is of some help to you...
I'm trying to scrape a particular webpage which works as follows.
First the page loads, then it runs some sort of javascript to fetch the data it needs to populate the page. I'm interested in that data.
If I GET the page with HtmlAgilityPack, the script doesn't run, so I get what is essentially a mostly blank page.
Is there a way to force it to run a script, so I can get the data?
You are getting what the server is returning - the same as a web browser. A web browser, of course, then runs the scripts. Html Agility Pack is an HTML parser only - it has no way to interpret the javascript or bind it to its internal representation of the document. If you wanted to run the script you would need a web browser. The perfect answer to your problem would be a complete "headless" web browser. That is something that incorporates an HTML parser, a javascript interpreter, and a model that simulates the browser DOM, all working together. Basically, that's a web browser, except without the rendering part of it. At this time there isn't such a thing that works entirely within the .NET environment.
Your best bet is to use a WebBrowser control and actually load and run the page in Internet Explorer under programmatic control. This won't be fast or pretty, but it will do what you need to do.
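A rough sketch of that approach with the WinForms WebBrowser control, assuming that waiting for DocumentCompleted is enough for the scripts you care about (the URL is a placeholder; pages that keep fetching data asynchronously may need an extra delay):

using System;
using System.Windows.Forms;
using HtmlAgilityPack;

class ScriptedPageScraper
{
    [STAThread]
    static void Main()
    {
        string renderedHtml = null;

        var browser = new WebBrowser { ScriptErrorsSuppressed = true };
        browser.DocumentCompleted += (s, e) =>
        {
            // Note: this event can fire once per frame; for simple pages
            // taking the last one is usually good enough. OuterHtml reflects
            // the DOM *after* the scripts have run, unlike the raw response.
            renderedHtml = browser.Document
                                  .GetElementsByTagName("HTML")[0].OuterHtml;
        };
        browser.Navigate("http://example.com/page-built-by-script");

        // Keep the message loop pumping until the page (and its scripts) finish.
        while (renderedHtml == null)
            Application.DoEvents();

        // Now hand the post-script DOM to Html Agility Pack as usual.
        var doc = new HtmlDocument();
        doc.LoadHtml(renderedHtml);
        // ... query doc.DocumentNode for the data the scripts filled in ...
    }
}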
Also see my answer to a similar question: Load a DOM and Execute javascript, server side, with .Net which discusses the available technology in .NET to do this. Most of the pieces exist right now but just aren't quite there yet or haven't been integrated in the right way, unfortunately.
You can use Awesomium for this, http://www.awesomium.com/. It works fairly well but has no support for x64 and is not thread safe. I'm using it to scan some web sites 24x7 and it's running fine for at least a couple of days in a row but then it usually crashes.
Here's the scenario: we've developed around 400 personal sites and we are currently trying to build our portfolio. For multiple reasons we would like to display each site's index page so we can put it in our portfolio. Our first thought was to programmatically take screenshots of every site, but the heads of our company promptly rejected that because they want to show it live. Iframes are apparently not an alternative either. So we have to download the index page, ideally with only the styles and images needed to display it properly.
I am unsure on how to start doing this.
Do you guys have any ideas?
The underlying technology of CodedUI (and Selenium) uses a web crawler to isolate specific useful parts of a web page. I recommend using that underlying library to crawl your webpages running live, and extract whatever images and divs make up your page structure.
You can then emit these as static HTML to make page snapshots suitable for a site index.
Doing it this way means you will be using the same technology as you use for test automation, but instead of running tests, you can extract the useful structure from your HTML and emit it as a page snapshot. You will have to mark the "useful" parts of your HTML to enable the crawler to extract just the items you think should be indexed (e.g. include a data- attribute if you're using HTML5). This might be a lot of work, so if you just need a screenshot of each of your pages, simply use Selenium or CodedUI to crawl your sites and capture the screen image, as in the sketch below.
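If the screenshot route is enough, a minimal Selenium sketch (C# bindings) could look like the following; the site list, the ChromeDriver choice and the file-naming scheme are all placeholders, not part of either framework's required setup:

using System.IO;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class PortfolioCapture
{
    static void Main()
    {
        // Placeholder list: in practice, read your ~400 site URLs from a file.
        var sites = new[]
        {
            "http://client-site-1.example",
            "http://client-site-2.example"
        };

        using (IWebDriver driver = new ChromeDriver())
        {
            for (int i = 0; i < sites.Length; i++)
            {
                driver.Navigate().GoToUrl(sites[i]);

                // Capture the rendered page as an image.
                Screenshot shot = ((ITakesScreenshot)driver).GetScreenshot();
                File.WriteAllBytes("site-" + i + ".png", shot.AsByteArray);

                // Alternatively, driver.PageSource gives the rendered markup
                // if you prefer to emit static HTML snapshots instead.
            }
        }
    }
}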
How can I get Picture Previews to work with IE 8 and up?
Can I get binary image data from an input type "file", with JavaScript/jQuery?
If I can just get the data (in the right format) back to the server, I should be able to work with it there, and then return it with AJAX (although, I am absolutely no AJAX expert).
There is, according to the research that I have done, NO WAY to get picture previews in all IE versions using only javascript (this is because getting the full file path is seen, by them, as a potential security risk). I could ask my users to add the site to the trusted sites, but you don't usually ask users to tamper with those kinds of low-level settings (not to mention the quickest way to make your site seem suspicious to users is to ask them to directly add your site to the trusted sites list. That's like sending an email and asking for a password. "Just trust me! I'm soooo safe!" :)
I have picture previews working in everything except IE and have no problem using conditional comments to separate an IE specific way of doing this from the way I am doing it with other browsers. In other words, the answer doesn't even have to be cross-browser, just cross-IE (8 and 9). I know I have seen IE sites use picture previews before (somehow), so I know there must be at least ONE way to do this...
So if you need to support IE lower than 10, you could upload the file to the server using one of the existing AJAX upload components (Uploadify, Plupload, Valums AJAX Upload, Blueimp, ...), generate and store a thumbnail on the server, and send the URL of the saved image back to the client as JSON so that it can display it with an <img> tag. Actually, since IE supports the Data URI scheme, you don't even need to store the uploaded file on the server to generate the preview: you could return the resulting thumbnail image from your Preview controller action formatted as a Data URI so the client can show it directly.
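To illustrate the Data URI variant, here is a hedged ASP.NET MVC sketch; the controller/action names, the "file" parameter and the 100x100 thumbnail size are assumptions. The AJAX upload component posts the file to this action, and the returned string can be set directly as the src of an <img>:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Web;
using System.Web.Mvc;

public class PreviewController : Controller
{
    [HttpPost]
    public ActionResult Thumbnail(HttpPostedFileBase file)
    {
        using (var image = Image.FromStream(file.InputStream))
        using (var thumb = image.GetThumbnailImage(100, 100, () => false, IntPtr.Zero))
        using (var ms = new MemoryStream())
        {
            // Encode the thumbnail as a base64 data URI instead of saving it.
            thumb.Save(ms, ImageFormat.Png);
            var dataUri = "data:image/png;base64," + Convert.ToBase64String(ms.ToArray());
            return Json(new { preview = dataUri });
        }
    }
}

Keep the thumbnail small: IE 8 caps data URIs at around 32 KB, so a full-size image encoded this way will not display there.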
Another solution if you don't have the time and resources to implement this functionality is to simply tell your users that if they want to get a realtime preview of the image that they should consider using a different web browser because your site doesn't support IE for this.
I'm developing a web site (using ASP.NET and C#) which has a FileUpload control. The upload itself works perfectly, but as far as I know I can't show progress data (%, bytes transferred, upload speed, time elapsed, time left, progress bar) using the ASP.NET FileUpload control because it's not asynchronous.
I've searched a lot (really) on the internet and didn't find what I'm looking for; the sheer amount of information has become confusing since I'm not sure what I should use.
On my web page I have a file named "UploadFile.aspx" which has a FileUpload control and a button that handles the uploading. In the code-behind (UploadFile.aspx.cs) I have all the server-side logic (upload the file into a specific folder, store info about that file in a database, etc.) and I don't want to change this.
What I need to know is how to show the progress data to the user while the file is uploading. I can't use 3rd party applications because this is for an important commercial site. It's not a problem for me if I have to learn JavaScript / jQuery / whatever, but I'm really a bit lost and I don't know where to start.
Thanks for your time and your help guys.
There are some pretty cool solutions out there. Granted, you can code your own, but I'd suggest using a jQuery plugin like Plupload. If you need help setting it up, you can read their documentation.
There is a lot of demo code available on the net showing a progress bar with the file upload control in C#. Most of it works fine on a local system but never works on a live server, because you can't use a FileUpload control for what you want to do. When a user POSTs a file, you have to think of it like a querystring parameter: it goes as one HTTP request. If you want a progress bar, you'll want to look into something that can interact with the server asynchronously.
If you don't want to use any 3rd party that relies on Flash / Html 5, please take a look at this article:
http://vanacosmin.ro/Articles/Read/AjaxFileUpload
This is possible (and if you're using .NET 4.5, GetBufferedInputStream will make your life easier), but it is not very easy, as you'll see.
Basically, if you want a file upload with a progress bar that is fully compatible with every browser, you need to handle this server-side and expose a URL where the client (the browser) can periodically check the progress with AJAX.
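To make that a bit more concrete, here is a very rough sketch of the server side under .NET 4.5. The handler name, the uploadId query parameter and the static dictionary are assumptions for illustration (a static dictionary is not production-grade state): the handler counts bytes as it reads the buffered request stream, and a second trivial endpoint can return that count to the browser's polling AJAX call.

using System.Collections.Concurrent;
using System.Web;

public class UploadHandler : IHttpHandler
{
    // Shared progress table, keyed by an uploadId the client generates.
    public static readonly ConcurrentDictionary<string, long> Progress =
        new ConcurrentDictionary<string, long>();

    public void ProcessRequest(HttpContext context)
    {
        string uploadId = context.Request.QueryString["uploadId"];

        // .NET 4.5: read the request body as it arrives instead of waiting
        // for the whole upload to be buffered.
        var stream = context.Request.GetBufferedInputStream();

        var buffer = new byte[8192];
        long total = 0;
        int read;
        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            total += read;
            Progress[uploadId] = total;   // a separate progress handler returns this to the client
            // ... write the bytes to disk / parse the multipart body here ...
        }
    }

    public bool IsReusable { get { return false; } }
}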
I want to find a decent solution to track the URLs and HTML content that users are visiting and provide more information to the user. The solution should have minimal impact on end users.
I don't want to write plugins for different browsers; that's hard to maintain.
A proxy is not acceptable either, since I don't want to change any of the user's proxy settings.
My application is written in C# and targets Windows. It's best if the solution can support other OSes as well.
Based on my research, I found the following methods that look workable for me, but all of them have drawbacks and I can't determine which one is best.
Use WinPcap
WinPcap sniffs all TCP packets without changing any user settings and only requires installing the WinPcap setup, which is acceptable to me. But I have two questions:
a. How do I convert TCP packets into URLs and HTML?
b. Does it really impact performance? I don't know whether sniffing all TCP traffic is too much overhead for this requirement.
Find history files for different browsers
This way looks like the easiest one, but I wonder if the solution is stable. I am not sure whether the browser writes history reliably, or when it writes it. My application needs to pop up information before the user leaves the current page, so the solution won't work for me if the browser only writes to the history file when the user closes the browser.
Use FindWindow, accessibility objects, or a COM interface to find the UI element which contains the URL
I find this approach incomplete; for example, Chrome will only show the active tab's URL, not all of them.
Another drawback is that I have to request the URL a second time to get its HTML content.
Any comment or suggestion is welcome.
BTW, I am not writing any spyware. The application is trying to find all RSS feeds on a web page and show them to end users. I could easily do that in a browser plugin, but I really want to support multiple browsers with a single UI. Thanks.
Though this is a very old post, I thought I'd give some input.
Approach 1 (WinPcap) is the best one. It will work for any browser, even the built-in browser of any other installed application, and it is relatively light on resources too.
There is a library, Pcap.Net, that has an HTTP parser. You can reconstruct the HTTP stream and use its HttpResponseDatagram to parse the body, which your application can then consume.
This link gave me more insight:
Tcp Session Reconstruction with Winpcap
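For what it's worth, a hedged sketch of the capture side with Pcap.Net might look like the following. It only pulls the request line out of plain HTTP packets on port 80 (HTTPS traffic is encrypted), and reassembling full responses across TCP segments still requires the session reconstruction described in that link.

using System;
using PcapDotNet.Core;
using PcapDotNet.Packets.Http;

class HttpSniffer
{
    static void Main()
    {
        // Placeholder: picks the first adapter; real code should let the user choose.
        LivePacketDevice device = LivePacketDevice.AllLocalMachine[0];

        using (PacketCommunicator communicator =
                   device.Open(65536, PacketDeviceOpenAttributes.Promiscuous, 1000))
        {
            communicator.SetFilter("tcp port 80");   // plain HTTP only

            communicator.ReceivePackets(0, packet =>
            {
                HttpDatagram http = packet.Ethernet.IpV4.Tcp.Http;
                if (http != null && http.IsRequest)
                {
                    var request = (HttpRequestDatagram)http;
                    // The Host header in http.Header gives the domain to prepend
                    // if you want the full URL rather than just the path.
                    Console.WriteLine("Request URI: " + request.Uri);
                }
            });
        }
    }
}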