I'm trying to read the value of a session ID that is served up to a client page (a PIN that can then be given to other users who want to join the session). According to Chrome developer tools, it is located within this element:
<input type="text" size="18" autocomplete="off" id="idSession" name="idSession" class="lots of stuff here" title="">
So far I've been using C# and XPath to navigate around the site successfully for testing purposes, but I just can't get hold of the PIN that is generated within id="idSession", or by using any other identifier through XPath. There's a bunch of jQuery going on in the background, but the PIN isn't showing up there either (the code knows about the on-screen locations for the ID in the .js files, but that's it).
I'm new to all of this, so I would really appreciate a nudge in the right direction, i.e. what different tools I need for this, what I am missing, and what I need to read up on.
Thanks a lot.
What about //input[@id='idSession']/@value to get the content?
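If you're loading the page HTML in C#, a minimal sketch of applying that XPath with the HtmlAgilityPack library (an assumption on my part; the OP's stack isn't specified, and pageHtml is a placeholder variable) could be:

using HtmlAgilityPack;

var doc = new HtmlDocument();
doc.LoadHtml(pageHtml); // pageHtml: the HTML you already fetched
var input = doc.DocumentNode.SelectSingleNode("//input[@id='idSession']");
string pin = input?.GetAttributeValue("value", ""); // empty string if the attribute is missing

Note that if the PIN is filled in by JavaScript after the page loads, it won't be present in the served HTML at all, and no XPath will find it there.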
Also, here is a link to a helper library for creating XPath using LINQ-esque syntax:
var xpath = CreateXpath.Where(e => e.TargetElementName == "input" &&
                              e.Attribute("id").Text == "idSession")
                       .Select(e => e.Attribute("value"));
http://unit-testing.net/CurrentArticle/How-to-Create-Xpath-From-Lambda-Expressions.html
Related
I am coming back to work on a BOT that scraped data from a site once a day for my personal use.
However, they changed the code during COVID, and now it seems they are loading a lot of the content with Ajax/JavaScript.
I thought that if I did a WebRequest and obtained the response HTML from a URL, it would match the content I see in a browser (Firefox/Chrome) when I right-click and "view source". I thought the actual DOM and generated source would come later, when those files were loaded, as onload events fired, scripts lazily loaded, and so on.
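For reference, a minimal sketch of the kind of fetch described above (the URL is a placeholder):

// Download the raw response HTML, before any JavaScript runs (using System.Net and System.IO)
var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string rawHtml = reader.ReadToEnd(); // the pre-JavaScript source, not the live DOM
}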
However, the source HTML I obtain with my BOT is NOT the same as the HTML I see when viewing the source code, so the regular expressions that find certain links have nothing to match.
Why am I seeing a difference between "view source" and a download of the HTML?
I can only think that when the page loads, scripts run that load other content into the page, and that when I view source I am actually seeing a partially generated source rather than the original source code. Therefore, is there a way I can call the page with my BOT and wait X seconds before obtaining the response, to get this "onload" generated HTML?
Or, even better, a way for MY BOT (not someone else's) to view the generated source.
This BOT runs as a web service. I could find another site to scrape, but it's just painful when I have all the regular expressions working on the source I see, except it's NOT the source my BOT obtains.
I am a bit confused as to why my browser shows me more content with a view source (not generated source) than my BOT gets when making a valid request.
Any help would be much appreciated. This is an almost 8-year project that I have been working on on and off, and this change has broken one of the core parts of the system.
In response to OP's comment, here is the Java code for how to click on different parts of the screen to do this:
You could use Java's Robot class. I just learned about it a few days ago:
// Imports
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;

// Code
void click(int x, int y, int buttonMask) throws AWTException {
    Robot robot = new Robot();
    robot.mouseMove(x, y);        // move the pointer to (x, y)
    robot.mousePress(buttonMask); // press, then release, the given button
    robot.mouseRelease(buttonMask);
}
You would then call the click function with the x and y position to click, as well as the button mask (InputEvent.BUTTON1_DOWN_MASK, InputEvent.BUTTON2_DOWN_MASK, etc.), which is what mousePress and mouseRelease expect.
After stringing together the right positions (this will vary depending on the screen) you could do just about anything.
To use keyboard shortcuts, just use the keyPress and keyRelease methods (with java.awt.event.KeyEvent imported, and robot held as a field rather than a local). Here is a good way to do this:
void key(int keyCode, boolean ctrl, boolean alt, boolean shift) {
    if (ctrl)
        robot.keyPress(KeyEvent.VK_CONTROL);
    if (alt)
        robot.keyPress(KeyEvent.VK_ALT);
    if (shift)
        robot.keyPress(KeyEvent.VK_SHIFT);
    robot.keyPress(keyCode);
    robot.keyRelease(keyCode);
    if (ctrl)
        robot.keyRelease(KeyEvent.VK_CONTROL);
    if (alt)
        robot.keyRelease(KeyEvent.VK_ALT);
    if (shift)
        robot.keyRelease(KeyEvent.VK_SHIFT);
}
Thus, something like Ctrl+Shift+I to open the inspect menu would look like this:
key(KeyEvent.VK_I, true, false, true);
Here are the steps to copy a website's code (from the inspector) with Google Chrome:
Ctrl + Shift + I
Right click the HTML tag
Select "Edit as HTML"
Ctrl + A
Ctrl + C
Then, you can use the technique from this StackOverflow answer to get the content from the clipboard (imports: java.awt.Toolkit, java.awt.datatransfer.Clipboard, java.awt.datatransfer.DataFlavor):
Clipboard c = Toolkit.getDefaultToolkit().getSystemClipboard();
String text = (String) c.getData(DataFlavor.stringFlavor); // throws UnsupportedFlavorException, IOException
Using something like FileOutputStream to put the info into a file:
FileOutputStream output = new FileOutputStream(new File(PATH_HERE)); // PATH_HERE: your output path
output.write(text.getBytes());
output.close();
I hope this helps!
I seem to have fixed it by just turning on the ability to store cookies in my custom HTTP (Bot/Scraper) class that was being called from the class trying to obtain the data. Probably the site has a defence system to detect visitors who request pages but not the JS/CSS, with a different session ID on each request.
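For anyone hitting the same wall, a minimal sketch of what enabling cookie storage might look like in an HttpWebRequest-based scraper class (illustrative names, not the OP's actual code):

// Share one CookieContainer across requests so the session survives between pages
private static readonly CookieContainer cookies = new CookieContainer();

string Fetch(string url)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.CookieContainer = cookies; // without this, every request looks like a brand-new visitor
    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
        return reader.ReadToEnd();
}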
However, I would like to see some other examples, because if it is just cookies then they could use JavaScript to test for JavaScript, e.g. an AJAX call to log whether JS is actually on, or some DOM manipulation to determine whether you are really human, which would break it again.
Every site uses different methods to prevent scrapers, email harvesters, job-board scrapers, link harvesters, etc., including working out the standard time between requests for 100% verifiable humans versus BOTS and then using those values to help detect spoofed user-agents and so on. I wrote a whole system to stop BOTS at my last place of work, and it's a layered approach. I'm just glad that enabling cookies solved it on this site, but it could easily be beefed up with other tricks to test for BOTS vs HUMANS.
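Purely as an illustration of the timing layer described above (the names and threshold are invented; real systems are far more involved), the core idea is just comparing the gap between successive requests from one client against a human baseline:

// Illustrative only: flag a client whose average gap between requests
// is implausibly short for a human
bool LooksLikeBot(List<DateTime> requestTimes, double minHumanGapSeconds = 2.0)
{
    if (requestTimes.Count < 2) return false;
    double total = (requestTimes[requestTimes.Count - 1] - requestTimes[0]).TotalSeconds;
    return total / (requestTimes.Count - 1) < minHumanGapSeconds;
}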
I do know some Java, enough to work out what is going on anyway. My BOT is in C#.
I have posted the same question before, but I am posting it again since I haven't received any answers to that post yet.
I am trying to get some information (such as a tagName or id, using the GetElementsByTagName or GetElementById method) from a content page in a website using WinForms.
As you can see in the attached pictures, no matter which selection you make (select1, select2, select3, etc.) the web address stays the same; however, the contents under those selections are different in the content page.
I am trying to access a tagName (or id) from one of them (not the selections, but the contents under a specific selection).
I have debugged and figured out (or so it seems) that I cannot access a tagName (or id) from any of those contents under a specific selection.
It seems like I can only access a tagName (or id) from the main page. Picture 3 will help explain some terms such as "main page" and "content page".
I tried to explain in detail; if my question still seems unclear, please let me know.
My code looks like this.
var countGetFile = webBrowser1.Document.GetElementsByTagName("IFRAME");
foreach (HtmlElement l in countGetFile)
{
    if (l.GetAttribute("width").Equals("100%"))
    {
        MessageBox.Show(l.GetAttribute("height"));
        MessageBox.Show(l.GetAttribute("outerText"));
    }
}
I was not able to grab information that sits two levels down, under the #document nodes in the HTML.
The HTML looks something like this:
...
<iframe src="..." id="A" ...>
    #document
        ...
        <iframe src="..." id="B" ...>
            #document
                ...
                <span id="C" ...>
                ...
I could grab the span information (the third tag in the sketch above) with code looking like this:
HtmlWindow frame1 = webBrowser1.Document.GetElementById("A").Document.Window.Frames["A"];
HtmlWindow frame2 = frame1.Document.GetElementById("B").Document.Window.Frames["B"];
foreach (HtmlElement elm in frame2.Document.All)
{
    if (elm.GetAttribute("tagName").Equals("C"))
    {
        // your command
    }
}
To use Document.Window.Frames you need a using directive: using System.Collections;
By the way, there is a problem. When I try to access the information in that third tag, I need to do some kind of work between frame1 and frame2, such as delaying so that frame2 has enough time to become accessible after frame1.
I figured out a kind of hack to get through it: place a message box to pop up for a short time delay, or place a delay function (one that does not freeze the UI) with async code looking like this:
async Task PutTaskDelay()
{
    await Task.Delay(5000); // 5 seconds
}
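A hypothetical sketch of how that delay might sit between the two frame lookups (the calling method would need to be async):

HtmlWindow frame1 = webBrowser1.Document.GetElementById("A").Document.Window.Frames["A"];
await PutTaskDelay(); // give the nested frame time to load before drilling into it
HtmlWindow frame2 = frame1.Document.GetElementById("B").Document.Window.Frames["B"];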
I have just found a temporary solution for accessing the second level. I would appreciate anyone who knows a proper way to solve this problem.
I'm using SAP NetWeaver (SAP Web UI) and have found it difficult to automate with Selenium.
I can say that a large number of objects' IDs/names actually change from one page reload to the next.
So I have tried to use _webDriver.FindElement(By.XPath("//*[contains(@id,'foo')]"));
But the element cannot be found. While this (finding an element by XPath) can work on one object, it won't work for a different one. Very frustrating.
It might be that either I'm doing something wrong or the object rendered is problematic.
This is an example of such an HTML object:
<input id="grid#28.115#7,1#if" ct="I" lsdata="{0:'grid\x2328.115\x237,1\x23if',2:'100000008',4:10,8:true,9:true,13:'100\x25',14:'FORCEDLEFT',17:true,18:true,19:true,20:'0',25:true,41:false,44:{MaxInputLen:'10'}}" lsevents="{FieldHelpPress:[{ClientAction:'none'},{modalNo:'0',rgv:[{id:'28.115',submit:'X',type:'GuiGridView'}]}]}" type="text" maxlength="10" tabindex="-1" ti="-1" class="lsTblEdf3 lsTblEdf3NoEllipsis urBorderBox lsControl--explicitwidth lsField__input" readonly="true" value="100000008" autocomplete="off" autocorrect="off" name="grid#28.115#7,1#if" style="vertical-align:top;text-align:left;" title="">
This is the way I'm trying to find it, but it fails:
var element = wait.Until(x => x.FindElement(By.XPath("//input[contains(@id,'115#7,1')]")));
You may use ExpectedConditions. See code below.
var elem = wait.Until(ExpectedConditions.ElementExists(By.CssSelector("input[id*='115#7,1']")));
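For completeness, a minimal sketch of the surrounding setup, assuming Selenium's C# bindings with WebDriverWait (the 10-second timeout is illustrative):

// Wait up to 10 seconds for the input to exist, then read its value
var wait = new WebDriverWait(_webDriver, TimeSpan.FromSeconds(10));
var elem = wait.Until(ExpectedConditions.ElementExists(By.CssSelector("input[id*='115#7,1']")));
Console.WriteLine(elem.GetAttribute("value")); // "100000008" in the sample element above

The CSS selector sidesteps the '#' characters in the id, which would need escaping in a CSS id selector but are harmless inside an attribute-substring match.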
I want to click on a link after navigating to a website:
webKitBrowser1.Navigate("http://www.somesite.com");
How do I click on a link on this website, assuming that the link's id is lnkId and the link looks something like this?
<a id="lnkId" href="http://www.google.com">Go to Google</a>
In the default browser control that comes with Visual Studio, I can do that using the code below :
foreach (HtmlElement el in webBrowser1.Document.GetElementsByTagName("a"))
{
    if (el.GetAttribute("id") == "lnkId")
    {
        el.InvokeMember("click");
    }
}
What is the equivalent of the code above when I'm using WebkitDotNet control?
As WebKit doesn't provide a Click() event (see here for details), you cannot do it in the above way. But a small trick may work as an equivalent of the original WinForms way, as below:
foreach (Node el in webKitBrowser1.Document.GetElementsByTagName("a"))
{
    if (((Element)el).GetAttribute("id") == "lnkId")
    {
        string urlString = ((Element)el).Attributes["href"].NodeValue;
        webKitBrowser1.Navigate(urlString);
    }
}
What I am doing here is casting the WebKit.DOM.Node object to its subclass WebKit.DOM.Element to get its Attributes. Then, providing href as the NodeName to the NamedNodeMap (i.e. Attributes), you can easily extract the NodeValue, which in this case is the target URL. You can then simply invoke Navigate(urlString) on the WebKitBrowser instance to replicate the click event.
I don't work with Windows, and all my experience is with WebKit GTK. The following comments are based on that experience.
I am not sure which WebKit .NET version you are using; it looks like there are multiple implementations. Assuming you are using the one mentioned by Wasif, you can evaluate JavaScript as shown in this example: https://code.google.com/p/open-webkit-sharp/source/browse/JavaScriptExample/Form1.cs
Actually, if the implementation supports JavaScript execution, then you can do most, if not all, DOM operations. The API functions usually have the same names as the JavaScript functions and most of the time call exactly the same functions internally, regardless of where the call originates. Communication between your application and JavaScript can be a little challenging, but if you can read alert messages, that can be solved too, and it looks like this library does support an alert-handling mechanism. A tool I wrote at https://github.com/nhrdl/notesMD shows some examples of achieving this communication, though it uses the GTK version and is written in Python.
Incidentally if you know the id of the element, then Document.GetElementById will save you the loop.
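A sketch of that shortcut, assuming this WebKit .NET build exposes GetElementById and the same Element cast used above:

// Look the link up directly by id instead of looping over every anchor
var link = (Element)webKitBrowser1.Document.GetElementById("lnkId");
if (link != null)
{
    webKitBrowser1.Navigate(link.Attributes["href"].NodeValue);
}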
webKitBrowser1.StringByEvaluatingJavaScriptFromString(
    "var inpt = document.createElement(\"input\");" +
    "inpt.setAttribute(\"type\", \"submit\");" +
    "inpt.setAttribute(\"id\", \"nut\");" +
    "inpt.setAttribute(\"name\", \"tmp\");" +
    "inpt.setAttribute(\"value\", \"tmp\");" +
    "var element = document.getElementById(\"lnk\");" +
    "element.appendChild(inpt);");
webKitBrowser1.StringByEvaluatingJavaScriptFromString(
    "document.getElementById('nut').click();");
I am automating a task using the WebBrowser control; the site displays its pages using frames.
My issue is that I get to a point where I can see the webpage loaded properly in the WebBrowser control, but when I get into the code and inspect the HTML, I see nothing.
I have seen other examples here too, but all of those do not return all of the browser's HTML.
What I get by using this:
HtmlWindow frame = webBrowser1.Document.Window.Frames[1];
string str = frame.Document.Body.OuterHtml;
Is just the main FRAME tag with attributes like SRC, etc. Is there any way to handle this? Since I can see the webpage has completely loaded, why do I not see the HTML? When I do this in Internet Explorer, I do see the page's source once loaded; why not here?
ADDITIONAL INFO
There are two frames on the page. I use this, as above:
HtmlWindow frame = webBrowser1.Document.Window.Frames[0];
string str = frame.Document.Body.OuterHtml;
And I get the correct HTML for the first frame, but for the second one I only see:
<FRAMESET frameSpacing=1 border=1 borderColor=#ffffff frameBorder=0 rows=29,*><FRAME title="Edit Search" marginHeight=0 src="http://web2.westlaw.com/result/dctopnavigation.aspx?rs=WLW12.01&ss=CXT&cnt=DOC&fcl=True&cfid=1&method=TNC&service=Search&fn=_top&sskey=CLID_SSSA49266105122&db=AK-CS&fmqv=s&srch=TRUE&origin=Search&vr=2.0&cxt=RL&rlt=CLID_QRYRLT803076105122&query=%22LAND+USE%22&mt=Westlaw&rlti=1&n=1&rp=%2fsearch%2fdefault.wl&rltdb=CLID_DB72585895122&eq=search&scxt=WL&sv=Split" frameBorder=0 name=TopNav marginWidth=0 scrolling=no><FRAME title="Main Document" marginHeight=0 src="http://web2.westlaw.com/result/dccontent.aspx?rs=WLW12.01&ss=CXT&cnt=DOC&fcl=True&cfid=1&method=TNC&service=Search&fn=_top&sskey=CLID_SSSA49266105122&db=AK-CS&fmqv=s&srch=TRUE&origin=Search&vr=2.0&cxt=RL&rlt=CLID_QRYRLT803076105122&query=%22LAND+USE%22&mt=Westlaw&rlti=1&n=1&rp=%2fsearch%2fdefault.wl&rltdb=CLID_DB72585895122&eq=search&scxt=WL&sv=Split" frameBorder=0 borderColor=#ffffff name=content marginWidth=0><NOFRAMES></NOFRAMES></FRAMESET>
UPDATE
The two URLs of the frames are as follows:
Frame 1, whose HTML I can see:
http://web2.westlaw.com/nav/NavBar.aspx?RS=WLW12.01&VR=2.0&SV=Split&FN=_top&MT=Westlaw&MST=
Frame 2, whose HTML I cannot see:
http://web2.westlaw.com/result/result.aspx?RP=/Search/default.wl&action=Search&CFID=1&DB=AK%2DCS&EQ=search&fmqv=s&Method=TNC&origin=Search&Query=%22LAND+USE%22&RLT=CLID%5FQRYRLT302424536122&RLTDB=CLID%5FDB6558157526122&Service=Search&SRCH=TRUE&SSKey=CLID%5FSSSA648523536122&RS=WLW12.01&VR=2.0&SV=Split&FN=_top&MT=Westlaw&MST=
The properties of the second frame, whose HTML I do not get, are in the picture below:
Thank you
I paid for the solution to the question above, and it works 100%.
What I did was use the function below, and it returned the count from the tag I was seeking, which I could not find before. Use this to call the function listed below:
FillFrame(webBrowser1.Document.Window.Frames);
private void FillFrame(HtmlWindowCollection hwc)
{
    if (hwc == null) return;
    foreach (HtmlWindow hw in hwc)
    {
        // doccount is a field on the form; the span id below is specific to this site
        HtmlElement getSpanid = hw.Document.GetElementById("mDisplayCiteList_ctl00_mResultCountLabel");
        if (getSpanid != null)
        {
            doccount = getSpanid.InnerText.Replace("Documents", "").Replace("Document", "").Trim();
            break;
        }
        if (hw.Frames.Count > 0) FillFrame(hw.Frames); // recurse into nested frames
    }
}
Hope it helps people.
Thank you
To grab the HTML you can do it this way:
WebClient client = new WebClient();
string html = client.DownloadString(#"http://stackoverflow.com");
That's an example of course, you can change the address.
By the way, you need a using System.Net; directive.
This works just fine... it gets the BODY element with all inner elements:
Somewhere in your Form code:
wb.Url = new Uri("http://stackoverflow.com");
wb.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(wbDocumentCompleted);
And here is wbDocumentCompleted:
void wbDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    var yourBodyHtml = wb.Document.Body.OuterHtml;
}
wb is a System.Windows.Forms.WebBrowser.
UPDATE:
The same as for the document: I think that your second frame is not loaded at the time you check for its content. You can try the solutions from this link. You will have to wait for your frames to be loaded in order to see their content.
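One common pattern for that wait, sketched here as an assumption rather than tested code: DocumentCompleted fires for the main document and again for each frame, so bail out until the whole page reports complete:

void wbDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    // Fires once per document (page and each frame); wait until everything is loaded
    if (webBrowser1.ReadyState != WebBrowserReadyState.Complete)
        return;
    HtmlWindow frame = webBrowser1.Document.Window.Frames[1];
    string str = frame.Document.Body.OuterHtml; // may still fail for cross-domain frames, as below
}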
The most likely reason is that frame index 0 has the same domain name as the main/parent page, while frame index 1 has a different domain name. Am I correct?
This creates a cross-frame security issue, and the WB control just leaves you high and dry: it doesn't tell you what went wrong, and simply leaves your objects, properties, and data empty (the watch window will say "No Variables" when you try to expand the object).
The only thing you can access in this situation is pretty much the URL and iFrame properties, but nothing inside the iFrame.
Of course, there are ways to overcome the cross-frame security issues, but they are not built into the WebBrowser control; they are external solutions, depending on which WB control you are using (as in, .NET version or pre-.NET version).
Let me know if I have correctly identified your problem and, if so, whether you would like me to tell you about the solution tailored to your setup and instance of the WB control.
UPDATE: I have noticed that you're doing a .getElementByTagName("HTML")(0).outerHTML to get the HTML. All you need to do is call this on the document object, or the .body object, and that should do it: MyDoc.Body.innerHTML should get the content you want. Also, notice that there are additional iFrames inside these documents, in case that is relevant. Can you give us the main document URL that has these two URLs in it, so we can replicate what you're doing here? Also, I'm not sure why you are using DomElement; you should just cast it to the native object it wants to be cast to, either an IHTMLDocument2 or the object you see in the watch window, which I think is IHTMLFrameElement (if I recall correctly, but you will know what I mean once you see it). If you are trying to use an XML object, this could be the reason why you aren't able to get the HTML content; change the object declaration and casting if there is one, and give it a go & let us know :). Now I'm curious too :).