I am automating using Selenium WebDriver and C#. Is there a way to capture all URLs that my browser navigates to while my Selenium automation tests run, using an external tool such as FiddlerCore or Wireshark? I mean that while my tests continue to run, I would like one of these tools to capture the URLs in parallel, so that in case my tests fail, I could investigate further by using the final few URLs (from the point of failure) to debug the issue.
Is this really possible? Do I need to run one of these tools (Fiddler/Wireshark/any other tool) on a separate thread to capture the URLs?
Can this really be done?
There are a few options.
Start Wireshark (or Fiddler) before your Selenium test kicks off. You can do this with a batch file that gets executed in your test setup.
You can use a browser plugin for Fiddler. IE has one; I'm not sure whether a comparable plugin exists for every browser, though. You could then get Selenium to activate it through the browser, assuming Fiddler stays inside the browser window rather than opening a separate non-browser window that Selenium can't see. UPDATE: Fiddler plugins don't stay in the browser window, so this option won't work.
Write some wrapper code that reads driver.Url and stores it in a list. The wrapper would check whether driver.Url differs from the last stored entry and, if it does, add it to the list (a rough sketch follows after the pros and cons below).
All have pros and cons. Option 3 gives you the most control, since your test itself gathers the URLs and maintains a list in code that you can do whatever you want with. Option 1 gives you the most detail, depending on how you set up Wireshark, and lets you profile the entire machine and network experience. Option 2 is a middle ground where your test still drives it but the results live separately; because it is part of the browser you would have to avoid cleanup after your tests, and if you have more than one test executing at a time this could cause a lot of problems.
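For option 3, a minimal sketch in C# might look like this (the UrlRecorder class and the places you call it from are illustrative names, not part of any existing framework):
using System;
using System.Collections.Generic;
using OpenQA.Selenium;

public class UrlRecorder
{
    private readonly IWebDriver driver;
    private readonly List<string> visitedUrls = new List<string>();

    public UrlRecorder(IWebDriver driver)
    {
        this.driver = driver;
    }

    // Call this from your wrapper methods after every navigation or click.
    public void Record()
    {
        string current = driver.Url;
        if (visitedUrls.Count == 0 || visitedUrls[visitedUrls.Count - 1] != current)
        {
            visitedUrls.Add(current);
        }
    }

    // Dump the last few URLs when a test fails.
    public IEnumerable<string> LastUrls(int count)
    {
        int start = Math.Max(0, visitedUrls.Count - count);
        return visitedUrls.GetRange(start, visitedUrls.Count - start);
    }
}
On test failure you would log LastUrls(5) (or however many you need) alongside the exception.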
I want to test a ReactJS app with Coded UI and play it back with the Chrome browser. I'm recording my actions on IE; I have all the Selenium components and browser drivers installed, and my Chrome version is the latest (I tried older versions too). The scenario of my test is to type both login and password, log in, then log out. When I run my test on IE everything works perfectly. But when I run it on Chrome, it types everything and logs in, yet once it is on the main page with loads of components it cannot find the log-out button, saying it couldn't find controls with the given info (tags, ids, classes). But when I inspect the element with the info from the test, it matches perfectly.
Never mind folks, solved the issue. It turns out Chrome doesn't wait for all components to load, unlike IE. So as soon as it clicked the login button, it went searching for that component without waiting for the page to load. I just added some conditions telling it to wait until everything has loaded before looking for components.
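For anyone hitting the same issue with Selenium WebDriver in C#, an explicit wait is the usual fix; the locator below is a placeholder I made up for illustration (Coded UI has its own equivalent in WaitForControlExist):
using System;
using System.Linq;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class LoginPageWaits
{
    // Wait up to 10 seconds for the log-out button to exist and be visible, then click it.
    // By.Id("logout") is a hypothetical locator; use whatever identifies your real button.
    public static void ClickLogoutWhenReady(IWebDriver driver)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
        IWebElement logoutButton = wait.Until(d =>
        {
            var element = d.FindElements(By.Id("logout")).FirstOrDefault();
            return (element != null && element.Displayed) ? element : null;
        });
        logoutButton.Click();
    }
}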
Our Selenium tests were developed in C# and had been running just fine for months, but recently we noticed that a number of tests started failing when executed with the Firefox WebDriver.
After investigating the test results and executing the tests locally, we noticed that from time to time clicks are executed on random elements (we can tell because the visual state of the button or link changes to what looks like a clicked element).
The browser console does not indicate any errors. The WebDriver logs show that the click was executed.
I will be grateful for any help.
Edit:
Version of Selenium WebDriver - 2.53.0
Versions of Firefox - (tried a few) 33.0.1, 43.0.1, 45.0, 46.0.1
Firefox scale 100%
tried with native events on and off
tried with additional implicit waiting before click
Your question is not very specific, so I'll try to offer some possible ways to resolve it.
You didn't indicate which driver and browser versions you used. If you didn't observe failures for months and they suddenly appeared, my first guess would be that the Firefox version on your test machine(s) got updated (or the driver version used by the tests was changed), and the new combination behaves differently. I had a situation like this where test behavior changed, and updating the driver version helped.
Another option would be to see which web elements get misclicked more often than others and insert checks that they are displayed before executing the actual click; a sketch of such a check follows below.
Also, try step-by-step debugging (if you haven't already) and see whether you observe the wrong clicks then.
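One way to write that displayed-check in C# is sketched below (the helper name and defaults are mine, not from the question):
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class ClickHelper
{
    // Wait until the element is present, visible and enabled before clicking it.
    public static void SafeClick(IWebDriver driver, By locator, int timeoutSeconds = 10)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds));
        wait.IgnoreExceptionTypes(typeof(NoSuchElementException));
        IWebElement element = wait.Until(d =>
        {
            var candidate = d.FindElement(locator);
            return (candidate.Displayed && candidate.Enabled) ? candidate : null;
        });
        element.Click();
    }
}
Swapping element.Click() for ClickHelper.SafeClick(driver, By.Id("save")) on the flaky elements makes it easy to compare failure rates.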
One thing we've seen in our testing is that if we're clicking around on the VM while a Selenium test is running on it, it can actually prevent clicks from firing.
Another thing we've encountered is that clicks often don't land where they should, which you can counter by using JavaScript clicks instead of Selenium's.
For elements that fail regularly, switch the element.Click() to a method utilizing the code below:
// Perform the click through JavaScript on the element itself.
IJavaScriptExecutor executor = driver as IJavaScriptExecutor;
executor.ExecuteScript("arguments[0].click();", ElementToClick);
So the problem was that the click does not work even when clicking manually, so we'll be investigating in that direction.
I want to monitor changes in the background of a complex web application. It is a single-page application with many scripts and so on. I need to be logged in to have access to the data I want to monitor.
I tried to use WebRequest, but I think the application is too complex to handle that way. There is also a problem with authentication.
I also tried the WebBrowser component, but the web application tells me that this browser is too old and I should get a newer one.
The perfect solution would:
Open this web application in Chrome (or some other modern browser) in the background
Save the page to memory
Extract values using something like HtmlAgilityPack
While this is happening I want to use the computer normally (so opening a Chrome window is not a good solution for me).
Is there any way to achieve something like that?
If you can cope with an extra browser running, have a look at SeleniumHQ. With its WebDriver-backed Selenium you can start a dedicated browser instance and perform user actions by coding in a high-level programming language like Java. It should not interfere with your manual work at all, but it will take up the same amount of memory and CPU time as your "real" browser would.
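If you do try the Selenium route from C#, a rough sketch could look like the one below; it assumes a Chrome/chromedriver version recent enough to support the --headless flag (so no window is shown), the HtmlAgilityPack package for parsing, and a made-up URL and selector:
using System;
using HtmlAgilityPack;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class BackgroundMonitor
{
    static void Main()
    {
        var options = new ChromeOptions();
        options.AddArgument("--headless");   // run without a visible window (needs a headless-capable Chrome)

        using (IWebDriver driver = new ChromeDriver(options))
        {
            driver.Navigate().GoToUrl("https://example.com/app");   // placeholder URL
            // ...perform the login steps here with driver.FindElement(...).SendKeys(...) and Click()...

            // Hand the rendered DOM to HtmlAgilityPack and pull out the value to monitor.
            var doc = new HtmlDocument();
            doc.LoadHtml(driver.PageSource);
            var node = doc.DocumentNode.SelectSingleNode("//span[@id='value-to-monitor']");   // hypothetical selector
            Console.WriteLine(node != null ? node.InnerText : "value not found");
        }
    }
}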
If the web application has no CAPTCHA and does not object to an automated script accessing it, you could also log in from a background program by sending the appropriate HTTP requests and parsing the responses. Python's urllib2 would be my first choice.
If you don't want any additional processes running, you could also create a browser plugin that auto-refreshes and parses a certain open tab every few seconds.
I'm trying to make my tests run faster on a dedicated server. I've noticed that normally the tests run sluggishly, but when I increase the Firefox process priority (which by default is lower than normal), they run much faster.
I was looking for a setting in FirefoxDriver that would let me choose the process priority, but I can't find one.
Can anyone point me to how to set the WebDriver process priority in Selenium?
I disagree with why you are doing this, and I think simply changing the priority is not the way to solve your issue.
There is no API exposed to do this, so you could send a request off to the Selenium developers for this (http://code.google.com/selenium).
Due to this, you will have to change the process priority after Selenium has created a browser session.
You will need to find the process:
var fireFoxProcesses = Process.GetProcessesByName("firefox");
This will return an array of Process objects; however, if you are running one test after another, there should only be one firefox.exe process open (this is my assumption). Therefore, we get the actual process object:
// should only be one, unless you are running a few tests concurrently
var actualFirefoxProcess = fireFoxProcesses.First();
Finally, change its priority class:
actualFirefoxProcess.PriorityClass = ProcessPriorityClass.High;
I would guess this can get a little unreliable though.
Edit
As for differentiating a 'user created' Firefox from one run by Selenium, you can look at the parent process of the firefox processes; that is, which process launched the Firefox process?
No point in copying code, but this solution worked well for me: How can I get the PID of the parent process of my application ...this then gets tricky because a user can launch Firefox in multiple ways, but if they are using a shortcut or Start-menu item, the parent process will be explorer.
You haven't mentioned what you are using to run the tests (Visual Studio's Test Runner, NUnit's own GUI, TeamCity, CruiseControl, Jenkins, TFS or some other CI solution), but you'll need to check what launched the Firefox process in order to determine whether it was a "user created" Firefox instance or one created by your Selenium tests.
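For reference, one common way to look up a process's parent on Windows is a WMI query against Win32_Process; the sketch below shows that general approach and is not the exact code from the linked answer:
using System;
using System.Diagnostics;
using System.Management;   // add a reference to the System.Management assembly

public static class ProcessTree
{
    // Returns the name of the process that launched the given process,
    // e.g. "explorer" for a Firefox started from a shortcut or the Start menu.
    public static string GetParentProcessName(int processId)
    {
        string query = "SELECT ParentProcessId FROM Win32_Process WHERE ProcessId = " + processId;
        using (var searcher = new ManagementObjectSearcher(query))
        {
            foreach (ManagementObject process in searcher.Get())
            {
                int parentId = Convert.ToInt32(process["ParentProcessId"]);
                return Process.GetProcessById(parentId).ProcessName;
            }
        }
        return null;
    }
}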
I have created an HTML5 page that provides important server-side functionality. Unfortunately, it must be run in an HTML5 browser (Chrome, IE9, or Firefox) with a canvas to produce the results I need. It is completely self-contained, taking the needed parameters through the URL, and is ready to be closed once its OnLoad handler has finished. So far so good.
The following process needs to be automated (no human eyes or interaction) and will be run from within a web service (not from within a browser). Ideally, I don't want to waste extra cycles with a busy wait, or delay the result by waiting for long periods simply hoping the process has finished. I need to:
Open a browser (preferably Chrome) with a URL, using C#.
Wait for the page to completely finish loading - ideally receiving a callback of some kind.
Close the browser page when finished, again with C#.
We've tried using IE9. There is C# support to launch IE9, wait until it is no longer busy, and gracefully close the browser; however, the page loads resources asynchronously (there is no way around this), so we get the signal that it is no longer busy during the resource load instead of when the page has finished. Adding a busy wait would consume valuable server-side CPU cycles.
A simple CreateProcess call would be nice, but it would only work if the browser could close itself from the HTML; thanks to security measures in browsers, I can't find a reliable way to use HTML or JavaScript to close a browser that was launched from the command line (I did see that you can close tabs spawned from an already-open page, Firefox only, but this doesn't help).
Does anyone know how I can accomplish this goal? Again, there is no human involvement in any part of the process; no human eyes will ever see the page or interact with it in any way. The page only runs on the server machine and will never be deployed to a client machine.
I would suggest using the WebBrowser control to load the HTML. Once you get the data back, use ObjectForScripting to call a C# method to notify you when it's done.
See http://www.codeproject.com/Tips/130267/Call-a-C-Method-From-JavaScript-Hosted-in-a-WebBro
You don't really even have to show the WebBrowser control.
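A minimal sketch of that pattern is below; it assumes the page calls a hypothetical window.external.NotifyDone(...) when it finishes, and the class and method names are mine:
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

// The object exposed to the page's JavaScript must be ComVisible.
[ComVisible(true)]
public class ScriptCallback
{
    // The page calls window.external.NotifyDone(result) when it has finished its work.
    public void NotifyDone(string result)
    {
        Console.WriteLine("Page finished: " + result);
        Application.ExitThread();   // stop the hidden message loop started below
    }
}

public static class HiddenBrowserRunner
{
    // Must be called from an STA thread (a WebBrowser requirement).
    public static void Run(string url)
    {
        var browser = new WebBrowser();              // never shown on screen
        browser.ScriptErrorsSuppressed = true;
        browser.ObjectForScripting = new ScriptCallback();
        browser.Navigate(url);
        Application.Run();                           // pump messages until NotifyDone fires
    }
}
One caveat: the WebBrowser control renders in an old IE document mode by default, so an HTML5/canvas page may also need the FEATURE_BROWSER_EMULATION registry key set for the host process.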
Let me know if you have any questions. Hope it helps!
Automating the browser - that's what Selenium does. I think it will be a good fit for the task, and there's good C# support. It can even run the browser on a remote machine using the Selenium RC server.
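If you go that route from C#, a rough sketch of the open / wait-for-load / close cycle might look like this; the document.readyState check through WebDriverWait is a common idiom standing in for a true "page finished" callback, and it polls at a low frequency rather than busy-waiting:
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

public static class PageRunner
{
    public static void Run(string url)
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl(url);

            // Wait until the document reports that loading has finished (roughly the OnLoad point).
            var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(60));
            wait.Until(d => (string)((IJavaScriptExecutor)d).ExecuteScript("return document.readyState") == "complete");
        }   // disposing the driver closes the browser
    }
}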