How do I open a webpage as a popup using C# and run a function when the popup window is closed? My intention is to create a web login/logout and run a function after successful completion of that event.
Well, you've not given much away, but if I'm right in guessing that your architecture is ASP.NET, then you should have events on the server in your page's code-behind that can process that event. If you expand a bit on your requirements we can help you out a bit more.
Just for completeness, you should know that you can't just run C# code in a browser with HTML/JavaScript. You could run a Silverlight application, but I don't think that's what you're after.
To summarise, make a web request and respond to it on the server. Popups are just webpages, so the architecture there is the same. When the request comes back, you can then run JS to close the popup and make the main browser window do something.
Personally, I'd just have the main browser window do the login; popups are cumbersome for users in web apps.
If you are using jQuery, I would strongly suggest using ThickBox http://jquery.com/demo/thickbox/ We use it in every single project we do; it works very well and it's easy to modify to do what you want.
You can use it to load another .aspx page where your login code would reside, and then pay particular attention to
function tb_remove() {
    // ThickBox's close handler - add your callback here
}
which is called on close. This is where we added our code to return data to the page.
The short answer is, you can't. C# runs on the server and opening a popup window is a client side action. You will need to have JavaScript in your rendered markup to open the popup window when appropriate, or an anchor tag with target="_blank".
However, I agree with the other answers that popup windows are more of a pain than they are worth; they annoy users and lead to window management issues that are not always easy to solve, especially when popup blockers are involved. A DOM-based modal dialog is almost always a better solution.
I would go with Neil... Sorry to say this but you are exactly the type of person Jeff Atwood was talking about when he wrote this article...
http://www.codinghorror.com/blog/archives/001296.html
I would suggest you take the time to learn the difference between client and server functionality, languages and technology.
I would also suggest you listen to Neil, your usability skills also need serious work.
LOL - and if you think I'm being cruel, think how cruel you're being to your users... login in a pop up window... bah
Cordova, JavaScript and HTML5 developers:
I need to intercept all GET requests made by the WebBrowser component in Windows Phone 8.0 at any point and be able to view the resource being requested. To give an example of "all", this is what I mean.
I have a simple application that contains a Cordova view (a WebBrowser control with decorators that allow XHR requests to be served from local storage) and navigates to an index.html file.
The index.html contains only the following in the body
<img src="logo.png" />
This file is loaded and displayed by the web browser, but there is no interceptable request made through the web browser to the Windows Phone that I can see. The file just appears in the browser magically. I know that Cordova rewrites all XHR requests to swap in local files, but no method in XHRHelper.cs is being hit with a request for logo.png. Here is everything I have tried, just so we can all be on the same page.
Subscribed to the Navigating, Navigated and NavigationFailed events (just because there's no other option) to see if any of them fire to load logo.png. This turned up nothing; they only fired for actual navigation calls made within the application. I also subscribed to all (and I literally mean all) events available from the WebBrowser control, even ones inherited from UIElement.
Tried to wrap the WebBrowser in a COM wrapper that "monitors" all outgoing traffic using http://www.codeproject.com/Articles/157329/Http-Monitor-for-Webbrowser-Control which does not work for the Windows Phone 8.0 WebBrowser control. It was still a good exercise, but it also does not catch requests made for local files.
The next option I checked out was a way of intercepting all requests of any kind from JavaScript, but I only found posts explaining how to intercept all AJAX requests, which is not what I want to do.
I then implemented those ways of intercepting all AJAX requests anyway, to see if they would give some insight into what I could possibly do. Nothing panned out from this exercise. I also tried this: How do you 'intercept' all HTTP requests in an EmberJS app? That did not help either. I then looked at intercept.js and tried to use that, but again logo.png slipped the grasp of intercept.js as well.
Being a Windows Phone .NET developer, not at all experienced in JavaScript except for six months of wrapping HTML5 apps in Cordova, I returned to the Windows Phone code, trying to catch the navigation as it leaves the WebBrowser control. I overrode all the methods specific to a WebBrowser and tried to cancel any and all requests, just to see if logo.png would still appear in the browser, and it did :(
I wouldn't ask this question if I hadn't done any research on the subject. Some JavaScript devs said they don't think it's possible from inside the application; many C++ developers said I should look at the native implementation of the WebBrowser control, find out what interfaces it extends, and somehow get hold of it for extension. I will be attempting that tomorrow all day, but would like not to overkill the situation if there's (hopefully) an easy way of doing this.
My next step is to use a tool like Fiddler or Charles to monitor all packets through a proxy. If I can see requests made for the local files through any of these tools, then there must be a way to intercept those requests in code. If this is successful I will attempt to set up a local proxy at runtime and redirect through my file handler.
I spoke to some iOS developers, and they use NSURLProtocol, which you just have to set up and then you can monitor all your traffic (lucky). Is there anything like this for Windows Phone 8.0? Does anyone have suggestions on how I could implement this for Windows Phone 8.0? Is there any way I can intercept all requests from an HTML5 app? Any way would be fine; I'm fairly confident that I'll be able to implement any suggestion and give feedback if it does not work. The biggest question is whether it's even possible.
Any feedback would be appreciated, and any suggestion will be followed through on; I will report back on whatever I try.
Thank you in advance, I know there are some serious Code Ninjas on here that will give me a million options :)
I have found a very simple solution to this problem. Since mobile IE10 does not provide functionality to intercept requests made by a webpage, I moved off that path and chose not to intercept the requests but rather to redirect them.
I did this by setting up a socket server on the phone and requesting the assets in the HTML5 application from the localhost. Here's an example:
A file that I used in index.html, like the one below,
<img src="logo.png" />
I changed to
<img src="http://localhost:99/logo.png" />
This way you get a process-request event fired in the socket server, where you can handle the asset request appropriately. You can take logo.png and give back an image with a different name by using a simple mapping in the socket server, which is what I needed to do.
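For anyone who wants a concrete starting point, here is a minimal sketch of such a server using WP8's StreamSocketListener. It assumes happy-path HTTP only, and the MapAsset helper is a hypothetical placeholder for your own name mapping; real code needs error handling and content-type headers.

using System.Threading.Tasks;
using Windows.Networking.Sockets;
using Windows.Storage.Streams;

public class LocalAssetServer
{
    private readonly StreamSocketListener _listener = new StreamSocketListener();

    public async Task StartAsync()
    {
        _listener.ConnectionReceived += OnConnectionReceived;
        await _listener.BindServiceNameAsync("99"); // matches http://localhost:99/
    }

    private async void OnConnectionReceived(StreamSocketListener sender,
        StreamSocketListenerConnectionReceivedEventArgs args)
    {
        // The first request line looks like: GET /logo.png HTTP/1.1
        var reader = new DataReader(args.Socket.InputStream)
        {
            InputStreamOptions = InputStreamOptions.Partial
        };
        await reader.LoadAsync(4096);
        string request = reader.ReadString(reader.UnconsumedBufferLength);
        string path = request.Split(' ')[1];

        byte[] body = MapAsset(path); // swap the requested name for the real asset

        var writer = new DataWriter(args.Socket.OutputStream);
        writer.WriteString("HTTP/1.1 200 OK\r\nContent-Length: " + body.Length + "\r\n\r\n");
        writer.WriteBytes(body);
        await writer.StoreAsync();
    }

    private byte[] MapAsset(string path)
    {
        // Hypothetical placeholder: load the mapped file from isolated storage here.
        return new byte[0];
    }
}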
I hope this helps someone dealing with the same problem :)
I would like to build a bot - web crawler - to collect phone numbers.
I have a problem though: to see the phone number, a user must click something like "Show".
How can I solve this problem?
Check what the act of clicking the button does. Does it call a JavaScript function? Does that make an HTTP call to a backend? If so, your bot should make that call instead of screen-scraping the first page. If not, does it just play with the DOM of the page to show an item on screen?
All the data you're looking for comes from some sort of back end, so if you watch the developer tools of your browser while going through the page, you can usually figure out what the script calls in order to get the data.
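For example, if the developer tools reveal that the "Show" button fires a GET against a JSON endpoint, a short C# sketch of calling that endpoint directly might look like this (the URL is hypothetical; substitute whatever the network tab shows):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class PhoneFetcher
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Hypothetical endpoint discovered in the browser's network tab.
            string json = await client.GetStringAsync(
                "https://example.com/api/listing/12345/phone");
            Console.WriteLine(json); // parse the number out of the payload
        }
    }
}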
It is possible to make this harder (and that is what some sites do to protect themselves from scraping). Typically if you're in this situation, what you're doing is not entirely legal or nice. But technically it's very interesting, so here goes.
The best way forward is to run the site in a real browser (like PhantomJS or Chrome) and use a framework like WebDriver to simulate browser interactions. This way you can usually pull most of the data out.
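A minimal sketch with Selenium's C# WebDriver bindings; the selectors are assumptions about the target page, not real ones:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class Scraper
{
    static void Main()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://example.com/listing/12345");
            driver.FindElement(By.CssSelector("a.show-number")).Click(); // the "Show" link
            Console.WriteLine(driver.FindElement(By.CssSelector(".phone")).Text);
        }
    }
}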
If you find that your IP gets blocked, you may use Tor and hit the site from multiple instances dynamically... but make sure you ask the site owner nicely whether you're allowed to do that, of course.
I have a Windows Forms application that updates its GUI from a website using WebClient GET requests. However, some of those values are updated in the web page using JavaScript, so users don't have to keep refreshing the page to get them. How could I make my program get those values without having to keep sending new GET requests?
The best way to do exactly what you want is to reverse-engineer the JavaScript that updates the values on the page you're scraping. Beyond that, I'm afraid what you're already doing is the best you can do.
On the plus side, JavaScript is nothing more than plain-text source code, so you can take a peek at it. But the legality of doing so depends on where you are. In most places, including the US, just looking at the online code is legal; reproducing it is not. But as the judge in the Oracle vs. Google case said, it doesn't make sense to apply copyright to a single function (I'm paraphrasing; he said "range_check", not "a single function").
If the JavaScript is obfuscated, then copy-paste it into a pretty printer. Just Google for "javascript pretty printer"; there are lots of them online.
You say that you want to be able to do something in C# like you do in JavaScript, but you don't want to have to "keep sending new GET requests". The thing is, that's exactly what the JavaScript is doing; it just happens to be doing it asynchronously behind the scenes. The JavaScript is just making GET or POST requests in the background, and you can make the very same requests with C#.
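As a sketch of that idea, assuming (hypothetically) that the page's JavaScript periodically GETs a simple endpoint, the C# equivalent could be:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Poller
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            while (true)
            {
                // Hypothetical endpoint; find the real one with Fiddler or the dev tools.
                string value = await client.GetStringAsync(
                    "https://example.com/api/values/latest");
                Console.WriteLine(value); // update your GUI here instead
                await Task.Delay(TimeSpan.FromSeconds(5));
            }
        }
    }
}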
Or you can simply set a timer and call GetElementById on a hidden WebBrowser control.
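A rough WinForms sketch of that approach; "price" is an assumed element id, so use the real one from the page you load:

using System.Windows.Forms;

public class HiddenBrowserForm : Form
{
    private readonly WebBrowser _browser = new WebBrowser
    {
        Visible = false,
        ScriptErrorsSuppressed = true
    };
    private readonly Timer _timer = new Timer { Interval = 5000 };

    public HiddenBrowserForm()
    {
        Controls.Add(_browser);
        _browser.Navigate("https://example.com/live-values"); // hypothetical page
        _timer.Tick += (s, e) =>
        {
            // Read the value the page's JavaScript keeps updating.
            var element = _browser.Document?.GetElementById("price");
            if (element != null)
                Text = element.InnerText; // shown in the title bar for demo purposes
        };
        _timer.Start();
    }
}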
Background:
I am creating a Windows Forms app that automates order entry on an intranet web application. We have a large amount of order entry that needs to be done, which would cost us a lot of money, so I volunteered to automate the task.
Problem:
I am using the WebBrowser class to navigate the web app. I have gotten very far but have hit a roadblock: there is a part of the app that opens a web dialog page. How do I interact with the web dialog? My instance of the WebBrowser class is still on the parent page. I am hoping someone can point me in the right direction.
You've got a number of options. To expand on the answers from others and add a new idea...
Do it using the WebBrowser control: This is technically possible by either injecting JavaScript into the target page as demonstrated here or creating a JavaScript object and using it as a bridge via the WebBrowser.ObjectForScripting property (see the sketch after these options). This is very fragile: something as simple as the website changing an element's id could break it. You also need to make sure your code doesn't interfere with the functioning of the form (clashing function names, etc...)
Do it using a postback: Monitor the communications between the web browser and the server (I personally prefer Firefox/Firebug, but IE/Fiddler or Chrome/F12 are both good too). As long as you can replicate the actions of the browser exactly, the server can't know the difference. The problem here is that browsers are complex, and the more secure a form is, the more demanding the server is. This means you may have to fake a login, get cookies, send them back on subsequent requests, and handle ViewState data and XSS-prevention variables. It's possible, and it's far more robust than the first option, but it can be a pain to get working. If it's not a highly secure form, this is your best bet. More information here
Do it by browser automation: Selenium is probably the best option here (as mentioned by others) but suffers from a similar flaw to the WebBrowser control in that it's sensitive to changes on the form itself (but not as much so as the WebBrowser control).
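To make option 1 concrete, here is a rough sketch of driving a page loaded in the WebBrowser control; the element ids are assumptions about the target form:

using System.Windows.Forms;

public static class BrowserDriver
{
    // Call this after the WebBrowser's DocumentCompleted event has fired.
    public static void FillAndSubmit(WebBrowser browser)
    {
        HtmlDocument doc = browser.Document;
        doc.GetElementById("username").SetAttribute("value", "myUser"); // assumed id
        doc.GetElementById("loginButton").InvokeMember("click");        // assumed id
        // Or inject and run arbitrary script in the page:
        doc.InvokeScript("eval", new object[] { "document.forms[0].submit();" });
    }
}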
Incidentally, if you have Visual Studio Ultimate/Test edition (and some others, not sure which), it includes a suite of testing tools including an excellent engine to automate load testing a website. This is also superb for tracking down what exactly a form does as you can see every step of the emulation.
Hope this helps
You have two choices depending on the level of complexity you need:
Use an HTTP debugger like Fiddler to find out the POST data you need to send to each page and mimic it via an HttpWebRequest.
Use a browser automation tool like Selenium and let it do the job.
NOTE: Your activity may be considered spamming by the website, so be ready for IP blocking, CAPTCHAs...
You could give Selenium a go: http://seleniumhq.org/
UI automation is a far more intuitive approach to these types of tasks.
Let me rephrase the question...
Here's the scenario: as an insurance agent you are constantly working with multiple insurance websites. For each website I need to log in and pull up a client. I am looking to automate this process.
I currently have a solution built for iMacros but that requires a download/installation.
I'm looking for a solution using the .NET framework that will allow the user to provide their login credentials and information about a client and I will be able to automate this process for them.
This will involve knowledge of each specific website which is fine, I will have all of that information.
I would like for this process to be able to happen in the background and then launch the website to the user once the action is performed.
You could try the following tools:
StoryTestIQ
Selenium
Watir
Windmill Testing Framework
Visual Studio Web Tests
They are automated testing tools/frameworks that allow you to write automated tests from a UI perspective and verify the results.
Use WatiN. It's an open-source .NET library to automate IE and Firefox. It's a lot easier than manipulating raw HTTP requests or hacking the WebBrowser control to do what you want, and you can run it from a console app or service, since you mentioned this wouldn't be a WinForms app.
You can also make the browser window invisible if needed, since you mentioned only showing this to the user at a certain point.
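A minimal WatiN sketch; the URL and field names are placeholders, not the actual insurance sites:

using System;
using WatiN.Core;

class LoginBot
{
    [STAThread] // WatiN requires a single-threaded apartment
    static void Main()
    {
        using (var browser = new IE("https://example.com/login"))
        {
            browser.TextField(Find.ByName("username")).TypeText("agent007");
            browser.TextField(Find.ByName("password")).TypeText("secret");
            browser.Button(Find.ByValue("Log in")).Click();
            browser.WaitForComplete();
        }
    }
}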
I've done this in the past using the WebBrowser control inside a WinForms app that I execute on the server. The WebBrowser control will allow you to access the HTML elements on the page, input information, click buttons/links, etc. It should allow you to accomplish your goal.
There are also ways to do this without the WebBrowser control; look at the HTML Agility Pack.
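For completeness, a short HTML Agility Pack sketch that reads a page with no browser involved; the URL and XPath are hypothetical:

using System;
using HtmlAgilityPack;

class Parser
{
    static void Main()
    {
        HtmlDocument doc = new HtmlWeb().Load("https://example.com/clients/12345");
        var rows = doc.DocumentNode.SelectNodes("//table[@id='orders']//tr");
        if (rows == null) return; // the XPath matched nothing
        foreach (HtmlNode row in rows)
            Console.WriteLine(row.InnerText.Trim());
    }
}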
Assuming that you are talking about filling in and submitting a form or forms using a bot of some sort and then scraping the response to display to the user:
Use HttpWebRequest(?) to create a form post containing the relevant form fields and data from your model and submit the request.
Retrieve and analyse the response, storing any cookies, as you will need to resubmit them on the next request.
Formulate the next request based on the results of the first request (remembering to attach cookies as necessary) and submit it.
Retrieve the response and either display it, or parse it and display it (depending on what you are hoping to achieve).
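A hedged sketch of that flow with HttpWebRequest and a shared CookieContainer; the URL and form fields are made up:

using System;
using System.IO;
using System.Net;
using System.Text;

class FormBot
{
    static void Main()
    {
        var cookies = new CookieContainer(); // reused across every request

        var request = (HttpWebRequest)WebRequest.Create("https://example.com/login");
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.CookieContainer = cookies;

        byte[] body = Encoding.UTF8.GetBytes("user=agent007&pass=secret");
        using (Stream stream = request.GetRequestStream())
            stream.Write(body, 0, body.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string html = reader.ReadToEnd();
            // Analyse the response, then build the next request with the same
            // CookieContainer so the session carries over.
            Console.WriteLine(html.Length);
        }
    }
}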
You say this is not a client app, so I will assume a web app. The downside of this is that once you start proxying requests for the user, you will always have to proxy those requests, as there is no way for you to transfer any session cookies from the target site to the user, and there is no (simple/easy/logical) way for the user to log in to the target site and then transfer the cookie to you.
Usually when trying to do this sort of integration, people will use some form of published API for interacting with the companies/systems in question as they are designed for the type of interactions that you are referring to.
It is not clear to me what difficulty you want to communicate when you wrote:
I currently have a solution built for iMacros but that requires a download/installation.
I think here lie some requirements about which you are not being explicit. You certainly need to "download/install" your .NET program on your clients' machines, so what's the difference?
Anyway, Crowbar seems promising:
Crowbar is a web scraping environment based on the use of a server-side headless mozilla-based browser. Its purpose is to allow running javascript scrapers against a DOM to automate web sites scraping but avoiding all the syntax normalization issues.
For people not familiar with this terminology: "javascript scrapers" here means something like an iMacros macro, used to extract information from a web site (in the end it is a JavaScript program; what purpose you use it for does not, I think, make a difference).
Design
Crowbar is implemented as a (rather simple, in fact) XULRunner application that provides an HTTP RESTful web service implemented in javascript (basically turning a web browser into a web server!) that you can use to 'remotely control' the browser.
I don't know if this headless browser can be extended with add-ons like a normal Firefox installation. If it can, you could even think of using your iMacros macros (or using CoScripter) with appropriate packaging.
The more I think about this, the more I feel that this is a convoluted solution for what you wrote you want to achieve. So, please, clarify.