I am working on a website where no locator works except using FindElements and taking the 3rd a element, so I was curious to try XPath for the first time.
I could get the XPath in Chrome, but when I use it in Selenium, it says the element was not found.
I did a lot of searching but still couldn't find out what was wrong. So I tried the Facebook page, using the login field as a test. The XPath is //*[@id="email"]; it works perfectly in Chrome, but gives the same result in WebDriver.
C# code: driver.FindElement(By.XPath("//*[@id='email']"));
Any advice?
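One thing worth checking first: in XPath, attributes are addressed with `@`, not `#` (a `#id` prefix is CSS selector syntax, which Chrome's "Copy selector" produces). Here is the difference reduced to a few lines of Python, using the standard library's limited XPath support; the tiny form below is a made-up stand-in for the real page:

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for the login form; the real Facebook page is far larger.
html = """
<form>
  <input id="email" name="email"/>
  <input id="pass" name="pass"/>
</form>
"""

root = ET.fromstring(html)

# XPath addresses attributes with @ -- this finds the email field:
matches = root.findall(".//*[@id='email']")
print(len(matches))            # 1
print(matches[0].get("name"))  # email
```

The same `//*[@id='email']` predicate works in Selenium; the `#id` form is simply not valid XPath at all.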
I can give a complete solution in Python, taking into account the features of React (which Facebook uses).
But you have C#, so you can use the equivalent approach of executing JavaScript through Selenium (execute_script in Python, ExecuteScript in C#):
driver.get("https://www.facebook.com/")
driver.execute_script('''
    document.getElementById("email").value = "lg@hosct.com";
    document.getElementById("u_0_2").click();
''')
I did another try with clearer code:
driver.Url = "";
driver.FindElement(By.XPath("//*[@id='email']"));
It works now. The only difference between this and my earlier code is that I was visiting some other pages before the Facebook page; that seems to make the difference. Anyway, the code above works. If I encounter the issue again, I will post more detailed code.
Related
Is there an easy way to get an element from Chrome without using Selenium? Just pure C# code.
I was thinking of somehow getting the current tab's HTML source code and reading all the element values that way. Does anybody have any idea how to do this? It needs to be an already active browser, so I can't use any HTTP request.
You're looking for a C# interface to the Chrome DevTools Protocol. There is such a thing, unsurprisingly called ChromeDevTools. The included sample shows you some DOM navigation code.
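Under the hood, the DevTools Protocol is just JSON messages exchanged over a WebSocket that Chrome exposes when launched with --remote-debugging-port. As a rough illustration (the helper function below is made up for this sketch, but `DOM.getDocument` is a real protocol method), the messages such a library sends look like this:

```python
import json

def cdp_message(msg_id, method, params=None):
    """Build one Chrome DevTools Protocol request.

    Every CDP request is a JSON object with an id (used to match the
    response), a method name like "DOM.getDocument", and optional params.
    """
    return json.dumps({"id": msg_id, "method": method, "params": params or {}})

# Ask the browser for the root node of the DOM:
msg = cdp_message(1, "DOM.getDocument")
print(msg)
```

A real client opens the WebSocket URL Chrome advertises, sends this message, and reads the JSON response; the ChromeDevTools C# package wraps exactly this exchange.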
I'm trying to scrape a particular webpage which works as follows.
First the page loads, then it runs some sort of javascript to fetch the data it needs to populate the page. I'm interested in that data.
If I GET the page with HtmlAgilityPack, the script doesn't run, so I get what is essentially a mostly blank page.
Is there a way to force it to run a script, so I can get the data?
You are getting what the server is returning - the same as a web browser. A web browser, of course, then runs the scripts. Html Agility Pack is an HTML parser only - it has no way to interpret the javascript or bind it to its internal representation of the document. If you wanted to run the script you would need a web browser. The perfect answer to your problem would be a complete "headless" web browser. That is something that incorporates an HTML parser, a javascript interpreter, and a model that simulates the browser DOM, all working together. Basically, that's a web browser, except without the rendering part of it. At this time there isn't such a thing that works entirely within the .NET environment.
Your best bet is to use a WebBrowser control and actually load and run the page in Internet Explorer under programmatic control. This won't be fast or pretty, but it will do what you need to do.
Also see my answer to a similar question: Load a DOM and Execute javascript, server side, with .Net which discusses the available technology in .NET to do this. Most of the pieces exist right now but just aren't quite there yet or haven't been integrated in the right way, unfortunately.
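To see why the script-generated data is missing, here is the situation reduced to a few lines of Python, with the standard library's html.parser standing in for Html Agility Pack (the page content below is made up):

```python
from html.parser import HTMLParser

# What the server actually returns: an empty container plus a script
# that would fill it in -- but only if a browser executed it.
page = """
<html><body>
  <div id="results"></div>
  <script>document.getElementById("results").innerHTML = "42 items";</script>
</body></html>
"""

class TextCollector(HTMLParser):
    """Collects text found outside <script> tags."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.texts.append(data.strip())

p = TextCollector()
p.feed(page)
print(p.texts)  # [] -- the "42 items" text never exists in the raw HTML
```

A parser only sees the markup the server sent; the text the script would insert is simply never there.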
You can use Awesomium for this: http://www.awesomium.com/. It works fairly well, but it has no x64 support and is not thread-safe. I'm using it to scan some web sites 24x7; it runs fine for at least a couple of days in a row, but then it usually crashes.
Well, I used to use HtmlAgilityPack as well as XPath to scrape some info from websites, but I have read that CSS selectors are much faster, so I searched for a good CSS engine and found CsQuery. However, I am still confused, as I don't know how to get the CSS path of an element.
In XPath I used a Firefox plugin called XPath Checker that returned fine XPaths like this:
id('yt-masthead-signin')/button
But I can't find an equivalent one for CSS. So if someone could help me I would really appreciate it, because I can't find an answer on Google for my question specifically.
Install Firebug + FirePath.
Click the selection button to select something on the page; it can then generate either an XPath or a CSS selector (for the XPath above, the CSS equivalent is #yt-masthead-signin > button). However, you may need to tweak the generated selectors to make them more efficient.
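For the specific shape above, the translation is mechanical: XPath `id('yt-masthead-signin')/button` corresponds to the CSS selector `#yt-masthead-signin > button`. A toy converter for just that `id(...)/child` shape, as an illustration only (real XPath is far richer than this handles):

```python
import re

def id_xpath_to_css(xpath):
    """Convert an XPath of the shape id('some-id')/tag1/tag2 to a CSS selector.

    Only this narrow pattern is handled; anything else raises ValueError.
    """
    m = re.fullmatch(r"id\('([^']+)'\)((?:/[\w-]+)*)", xpath)
    if not m:
        raise ValueError("unsupported XPath: %s" % xpath)
    elem_id, tail = m.groups()
    steps = [s for s in tail.split("/") if s]
    # id(...) becomes #..., and each XPath step becomes a child combinator.
    return " > ".join(["#" + elem_id] + steps)

print(id_xpath_to_css("id('yt-masthead-signin')/button"))
# #yt-masthead-signin > button
```

The same selector string can then be fed straight to CsQuery (or any CSS engine).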
I have a results page that only works in IE. It is developed using C# and JavaScript in Visual Studio. I select a search parameter from the drop-down list and search, and the results from the DB are displayed in a results page. Those results seem to be displayed only when I use IE; Chrome and Firefox allow everything else to work except the results :/
Any ideas what could be occurring? Something I need to check in my web.config, perhaps?
Thank you in advance =)
This is likely an HTML issue and unrelated to ASP.NET. You should examine the generated HTML; it will be especially easy to see whether the data is even in the DOM by using the browser's developer tools.
In Chrome (since it's a place where it's not working) bring up the page and press CTRL+SHIFT+I - this will bring up the DEVELOPER TOOLS. Once up, attempt to use the page and watch the CONSOLE tab of the developer tools. You most likely have scripting errors and the Console will point them out. Many times, you can even click on the console-report to go directly to the offending code (but sometimes you cannot). Regardless, the developer console should help you find the trouble.
If it's a CSS issue, the ELEMENTS tab will be the most helpful to you instead - you can find the generated HTML there and click on an element; all applied CSS styles appear on the right, and you can review them (and even change them if you need to for testing purposes) to find and eliminate trouble items.
Hi all, hopefully this is a quick one.
I'm working on a C# web browser that, amongst other things, changes the CSS styles of web pages like Google and Facebook. For example, it will make the background on Facebook and Google red instead of white. I've had some success, but it's not at all consistent, and I have no idea why.
// Inject a <style> block by appending a DIV whose InnerHtml carries it
HtmlDocument doc = Browser1.Document;
HtmlElement textElem = doc.CreateElement("DIV");
textElem.InnerHtml = "<STYLE>body{background-color:red!important}</STYLE>";
doc.Body.AppendChild(textElem);
That code works on www.google.com but not on www.rationality.tk. However...
// Set the legacy bgcolor attribute directly on the <body> element
HtmlElement body = Browser1.Document.GetElementsByTagName("body")[0];
body.SetAttribute("bgcolor", "red");
That code works on www.rationality.tk but not on www.google.com, and neither works on www.facebook.com, which I cannot get anything to work on.
I'm probably doing something wrong. I just moved to C# after giving up on C++, and I have found it a lot easier, but I'm still getting the hang of it. Thank you in advance.
EDIT: PROBLEM SOLVED
// Setting the inline style on <body> works across all three sites
HtmlElement body = Browser1.Document.GetElementsByTagName("body")[0];
body.Style = "background-color:red";
This works on Facebook, Google and rationality.tk.
I am not very sure, but the problem might be due to embedded iframes and iGoogle-like widgets. I faced the same problem while implementing similar functionality in COM/ATL for my Browser Helper Object. The approach that helped me (to some extent) was to wait for the DOCUMENT_COMPLETE event for each iframe and then try to get its body. Again, I am not sure how you can achieve the same in C#; this is just food for thought.