C# Get element in Chrome browser WITHOUT Selenium. Just pure C# code

Is there an easy way to get an element from Chrome without using Selenium? Just pure C# code.
I was thinking I could somehow get the current tab's HTML source and read all the element values from that. Does anybody have any idea how to do this? It needs to be an already active browser, so I can't use any HTTP requests.

You're looking for a C# interface to the Chrome DevTools Protocol. There is such a thing, unsurprisingly called ChromeDevTools. The included sample shows you some DOM navigation code.
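For illustration, here is a minimal sketch of driving the DevTools Protocol by hand, without a wrapper library. It assumes Chrome was started with --remote-debugging-port=9222, picks the first tab it finds, and reads a #email field; the selector and the choice of target are placeholders, not something from the original answer.

// Assumes chrome.exe --remote-debugging-port=9222 is already running.
using System;
using System.Net.Http;
using System.Net.WebSockets;
using System.Text;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;

class CdpSketch
{
    static async Task Main()
    {
        // Ask the debugging endpoint for the list of open tabs.
        using var http = new HttpClient();
        string tabsJson = await http.GetStringAsync("http://localhost:9222/json");
        using var tabs = JsonDocument.Parse(tabsJson);
        string wsUrl = tabs.RootElement[0].GetProperty("webSocketDebuggerUrl").GetString();

        // Attach to that tab over its DevTools WebSocket.
        using var ws = new ClientWebSocket();
        await ws.ConnectAsync(new Uri(wsUrl), CancellationToken.None);

        // Evaluate JavaScript in the live page to read an element's value.
        string command = JsonSerializer.Serialize(new
        {
            id = 1,
            method = "Runtime.evaluate",
            @params = new { expression = "document.querySelector('#email').value", returnByValue = true }
        });
        await ws.SendAsync(new ArraySegment<byte>(Encoding.UTF8.GetBytes(command)),
                           WebSocketMessageType.Text, true, CancellationToken.None);

        // Read the reply (a real client would loop until the message with id == 1 arrives).
        var buffer = new byte[64 * 1024];
        var result = await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
        Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
    }
}

A library such as ChromeDevTools wraps this WebSocket plumbing in typed commands, but the messages on the wire look the same.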

Related

HtmlAgilityPack table returns null when selecting nodes [duplicate]

I'm trying to scrape a particular webpage which works as follows.
First the page loads, then it runs some sort of javascript to fetch the data it needs to populate the page. I'm interested in that data.
If I GET the page with HtmlAgilityPack, the script doesn't run, so I get what is essentially a mostly blank page.
Is there a way to force it to run a script, so I can get the data?
You are getting what the server is returning - the same as a web browser. A web browser, of course, then runs the scripts. Html Agility Pack is an HTML parser only - it has no way to interpret the javascript or bind it to its internal representation of the document. If you wanted to run the script you would need a web browser. The perfect answer to your problem would be a complete "headless" web browser. That is something that incorporates an HTML parser, a javascript interpreter, and a model that simulates the browser DOM, all working together. Basically, that's a web browser, except without the rendering part of it. At this time there isn't such a thing that works entirely within the .NET environment.
Your best bet is to use a WebBrowser control and actually load and run the page in Internet Explorer under programmatic control. This won't be fast or pretty, but it will do what you need to do.
Also see my answer to a similar question: Load a DOM and Execute javascript, server side, with .Net which discusses the available technology in .NET to do this. Most of the pieces exist right now but just aren't quite there yet or haven't been integrated in the right way, unfortunately.
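As a rough sketch of that WebBrowser-control approach (the URL and the element id below are hypothetical):

using System;
using System.Windows.Forms;

class WebBrowserScrape
{
    [STAThread]                       // the WebBrowser control requires an STA thread
    static void Main()
    {
        using var browser = new WebBrowser { ScriptErrorsSuppressed = true };
        browser.Navigate("https://example.com/page-built-by-javascript");

        // Pump messages until the page and its scripts have finished loading.
        while (browser.ReadyState != WebBrowserReadyState.Complete)
            Application.DoEvents();

        // The DOM now reflects what the scripts produced, not just the raw HTML.
        HtmlElement table = browser.Document.GetElementById("data-table");
        Console.WriteLine(table?.InnerText ?? browser.DocumentText);
    }
}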
You can use Awesomium for this, http://www.awesomium.com/. It works fairly well but has no x64 support and is not thread safe. I'm using it to scan some web sites 24x7, and it typically runs fine for a couple of days in a row before it crashes.

XPath works in Chrome but not in Selenium WebDriver

I am working on a website where no other locator works except calling FindElements and taking the 3rd <a> element, so I was curious to try XPath for the first time.
I could get the XPath from Chrome, but when I use it in WebDriver, it says the element was not found.
I did a lot of searching and still couldn't find out what was wrong. So I tried the Facebook page and used the login field as a test; the XPath is //*[@id="email"], which works perfectly in Chrome but gives the same result in WebDriver.
C# code: driver.FindElement(By.XPath("//*[@id='email']"));
Please see the Facebook screenshot for the element and its location.
Any advice?
I can give a complete solution in Python, taking into account the features of React (used by Facebook).
But you have C#, so you can use the equivalent of driver.execute_script (executing JavaScript through Selenium):
driver.get("https://www.facebook.com/")
driver.execute_script('''
    document.getElementById("email").value = "lg@hosct.com";
    document.getElementById("u_0_2").click();
''')
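A rough C# equivalent of that snippet, assuming the Selenium WebDriver and ChromeDriver NuGet packages; the element ids come from the answer above, and u_0_2 is a Facebook-generated id that may differ on your page:

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

var driver = new ChromeDriver();
driver.Navigate().GoToUrl("https://www.facebook.com/");
((IJavaScriptExecutor)driver).ExecuteScript(
    "document.getElementById('email').value = 'lg@hosct.com';" +
    "document.getElementById('u_0_2').click();");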
I did another try with clearer code:
driver.Url = "";
driver.FindElement(By.XPath("//*[@id='email']"));
It works now. The only difference between this and my earlier code is that I was visiting some other pages before the Facebook page; that seems to make the difference. Anyway, the code above works. If I encounter the issue again, I will post more detailed code.

Capture failed loads of content using Selenium

When loading a web page, the browser executes many GET requests to fetch resources such as images, CSS files, fonts and other assets.
Is there a way to capture failed GET requests using Selenium in C#?
Selenium does not natively provide this capability. I'm coming to this conclusion for two reasons:
I've not seen any function exported by Selenium's API which would allow doing what you want in a cross-platform way.
(I'm saying "cross-platform way" because I'm excluding from considerations possible non-standard APIs that could be exported by one browser but not others.)
If there is any doubt I may have missed something, then consider that ...
The Selenium team has quite consciously decided not to provide any means to get the response code of the HTTP request that downloads the page in the first place. It is extremely doubtful that they would have slipped in, behind the scenes, a way to get the response codes of the other HTTP requests launched to load the page's resources.
The way to check on such requests is to have the browser launched by Selenium connect through a proxy that records the responses, or to load the page with something other than Selenium.
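As a sketch, the C# bindings let you point the browser Selenium launches at such a proxy; the port below is just whatever your recording proxy (Fiddler, mitmproxy, BrowserMob, ...) happens to listen on:

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

var proxy = new Proxy
{
    HttpProxy = "localhost:8888",
    SslProxy  = "localhost:8888"
};

var options = new ChromeOptions { Proxy = proxy };
var driver = new ChromeDriver(options);
driver.Navigate().GoToUrl("https://example.com/");
// Afterwards, inspect the proxy's log for 4xx/5xx or aborted GETs.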

Get info from current webpage in Chrome

I am a member of a website and want to grab some info from the currently open window in Chrome. That is, if I am looking at a person's profile in Chrome, I want my C# program to be able to get the source code of that page so I can retrieve the birthday, location, etc. from it. Is there a way to do this?
I guess one solution is to incorporate the WebBrowser control in a WinForms project and use that instead of Chrome, but it would be nicer if I could just use Chrome as I normally do, and then, when I switch to my C# program, have it copy the source code and parse whatever info I find relevant.
You can try the C# version of Selenium for automating your browser. It's mainly designed for testing, but it should help you solve your problem. It comes with a driver for Google Chrome.
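For example, a minimal sketch with the ChromeDriver package; the URL and CSS selector are placeholders, and note that this drives a new Chrome instance that Selenium starts rather than the tab you already have open:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

using var driver = new ChromeDriver();
driver.Navigate().GoToUrl("https://example.com/members/some-profile");

string html = driver.PageSource;                                        // source of the rendered page
IWebElement birthday = driver.FindElement(By.CssSelector(".birthday")); // hypothetical selector
Console.WriteLine(birthday.Text);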

How to access IE XHTML DOM+JS engines without starting the browser itself

I'm trying to build a headless browser in C#. C# has plenty of classes which are supposed to make this possible, for example JScriptCodeProvider.
I am looking to get the IE XML DOM classes for the JavaScript code to work with. Can anyone tell me where to find those and, if possible, provide a workable example of what I'm trying to do?
Use the WebBrowser control. That should give you everything you need.
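A small sketch of that idea, hosting MSHTML's DOM and JScript engines in a hidden WebBrowser control; the HTML and the fill function are just illustrations:

using System;
using System.Windows.Forms;

class HiddenBrowser
{
    [STAThread]
    static void Main()
    {
        using var browser = new WebBrowser { ScriptErrorsSuppressed = true };
        browser.DocumentText =
            "<html><body><div id='out'></div>" +
            "<script>function fill(x){document.getElementById('out').innerText = x;}</script>" +
            "</body></html>";

        while (browser.ReadyState != WebBrowserReadyState.Complete)
            Application.DoEvents();

        // Call into the page's JavaScript engine and read the result back from the DOM.
        browser.Document.InvokeScript("fill", new object[] { "hello from C#" });
        Console.WriteLine(browser.Document.GetElementById("out").InnerText);
    }
}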
