Reading hidden web site textboxes with C#?

First of all, I am still pretty much a beginner, especially when it comes to web development.
From my WinForms application, I am trying to read the content of a text box on a web page that is open in a browser, and I am not able to modify the source code of the web page itself. Unfortunately, the string I am looking for is not simply written in the source code of the page, so I can't just download the page source and parse it. It seems the content of the textbox is populated via JavaScript.
I am generally not sure where to even start here, so any suggestions are very welcome.
I am also not sure what other information I should include; since I don't know where to start, I don't have any code to show yet.
Edit:
I have been trying to use the HTML Agility Pack, but I am still not sure how to get to what I need. Here is my code so far:
WebClient client = new WebClient();
string html = client.DownloadString(URL);
HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(html);
foreach (HtmlNode link in doc.DocumentNode.SelectNodes("//div[@class='ember-view']"))
{
    HtmlAttribute div = link.Attributes["div"];
    if (div != null)
    {
        outputBox.Text += div.Value;
    }
}
When I run the code, I get this:
An unhandled exception of type 'System.NullReferenceException' occurred.
Additional information: Object reference not set to an instance of an object.
When I go to the web page and do Inspect Element I get this (I only copied a few lines):
<html class="no-js" lang="en">
<head></head>
<body class="ember-application" lang="en-US" data-environment="production">
<div id="booting" style="display: none;"></div>
<div id="ember2493" class="ember-view">
<div id="alert" class="ember-view"></div>
I am not sure how to get to, let's say, the inner code of this line:
<div id="alert" class="ember-view"></div>
Also, my apologies if this is something obvious that I am missing, but again, this is all new for me. Thanks for the help so far.

Do you know the Html Agility Pack? I always use the Agility Pack for HTML crawling.
HtmlDocument doc = new HtmlDocument();
doc.Load("file.htm");
// Select every anchor that has an href attribute.
foreach (HtmlNode link in doc.DocumentNode.SelectNodes("//a[@href]"))
{
    HtmlAttribute att = link.Attributes["href"];
    att.Value = FixLink(att); // FixLink is your own method for rewriting the link
}
doc.Save("file.htm");

Perhaps something along the following lines may help?
var inputs = webBrowser1.Document.GetElementsByTagName("input");
foreach (HtmlElement input in inputs)
{
    var id = input.Id;
    var name = input.Name;
    var val = input.OuterHtml; // can parse value from here
}
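Since the page builds its content with JavaScript, the WebBrowser control sees the live DOM rather than the raw source, so you may be able to read the textbox value directly instead of parsing OuterHtml. A minimal sketch, assuming the code runs after the control's DocumentCompleted event has fired and that outputBox is the output textbox from the question:
// Read the live value of every <input> on the loaded page.
var inputs = webBrowser1.Document.GetElementsByTagName("input");
foreach (HtmlElement input in inputs)
{
    // GetAttribute returns the current DOM value, including text
    // that script has written into the field after the initial load.
    string value = input.GetAttribute("value");
    if (!string.IsNullOrEmpty(value))
    {
        outputBox.Text += input.Id + ": " + value + Environment.NewLine;
    }
}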

Related

HTML Agility Pack Node Selection

I'm brand new to HTML Agility Pack (as well as network-based programming in general). I am trying to extract a specific line of HTML, but I don't know enough about HTML Agility Pack's syntax to understand what I'm not writing correctly (and am lost in their documentation). URLs here are modified.
string html;
using (WebClient client = new WebClient())
{
    html = client.DownloadString("https://google.com/");
}
HtmlDocument doc = new HtmlDocument();
doc.LoadHtml(html);
foreach (HtmlNode img in doc.DocumentNode.SelectNodes("//div[@class='ngg-gallery-thumbnail-box']//div[@class='ngg-gallery-thumbnail']//a"))
{
    Debug.Log(img.GetAttributeValue("href", null));
}
return null;
This is what the HTML looks like
<div id="ngg-image-3" class="ngg-gallery-thumbnail-box" >
<div class="ngg-gallery-thumbnail">
<a href="https://urlhere.png"
// More code here
</a>
</div>
</div>
The problem occurs on the foreach line. I've tried matching the examples online as best I can, but I'm missing something. TIA.
HTMLAgilityPack uses XPath syntax to query nodes; HAP effectively converts the HTML document into an XML document. So the trick is learning XPath querying so you can get the right combination of tags and attributes to get the result you need.
The HTML snippet you pasted isn't well formed (there's no closing > on the anchor tag). Assuming that it is closed, then
//div[@class='ngg-gallery-thumbnail-box']//div[@class='ngg-gallery-thumbnail']//a[@href]
will return a node collection of only those tags that have href attributes.
If there are none that meet your criteria, nothing will be written.
For debugging purposes, perhaps log a less specific query's node count, or its OuterHtml, to see what you're getting, e.g.
Debug.Log(doc.DocumentNode.SelectNodes("//div[@class='ngg-gallery-thumbnail-box']//div[@class='ngg-gallery-thumbnail']")[0].OuterHtml);
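Also note that SelectNodes returns null, rather than an empty collection, when nothing matches; that is exactly what makes the foreach line throw. A small guard makes the failure visible instead; a minimal sketch:
var nodes = doc.DocumentNode.SelectNodes("//div[@class='ngg-gallery-thumbnail-box']//div[@class='ngg-gallery-thumbnail']//a[@href]");
if (nodes == null)
{
    // Nothing matched: either the XPath is wrong or the downloaded HTML
    // differs from what the browser shows (e.g. JavaScript-rendered content).
    Debug.Log("No matching nodes found.");
}
else
{
    foreach (HtmlNode a in nodes)
    {
        Debug.Log(a.GetAttributeValue("href", null));
    }
}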

Getting nodes from html page using HtmlAgilityPack

My program collects info about Steam users' profiles (such as games, badges, etc.). I use HtmlAgilityPack to collect data from the HTML pages, and so far it has worked just fine for me.
The problem is that on some pages it works well, but on others it returns null nodes or throws an exception:
object reference not set to an instance of an object
Here's an example.
This part works well (when I'm getting badges):
WebClient client = new WebClient();
string html = client.DownloadString("http://steamcommunity.com/profiles/*id*/badges/");
var doc = new HtmlDocument();
doc.LoadHtml(html);
HtmlNodeCollection div = doc.DocumentNode.SelectNodes("//div[@class=\"badge_row is_link\"]");
This returns the exact amount of badges, and then I can do whatever I want with them.
But in this one I do the exact same thing (but getting games), and somehow it keeps throwing the error I mentioned above:
WebClient client = new WebClient();
string html = client.DownloadString("http://steamcommunity.com/profiles/*id*/games/?tab=all");
var doc = new HtmlDocument();
doc.LoadHtml(html);
HtmlNodeCollection div = doc.DocumentNode.SelectNodes("//*[@id='game_33120']");
I know that the node is on the page (checked via Google Chrome's code view), and I don't know why it works in the 1st case but not in the 2nd.
When you right-click on the page and choose View Source do you still see an element with id='game_33120'? My guess is you won't. My guess is that the page is being built dynamically, client-side. Therefore, the HTML that comes down in the request doesn't contain the element you're looking for. Instead that element appears once the Javascript code has run in the browser.
It appears that the original request will have a section of Javascript that contains a variable called rgGames which is a Javascript array of the games that will be rendered on the screen. You should be able to extract the information from that.
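A sketch of that idea: pull the rgGames array out of the inline script with a regular expression and parse it as JSON. This assumes the variable is assigned a JSON array literal (var rgGames = [...];), which you should verify against the actual page source; Newtonsoft.Json is used here for parsing, and the property name inside the loop is a guess to be confirmed by inspecting the data:
using System.Net;
using System.Text.RegularExpressions;
using Newtonsoft.Json.Linq;

WebClient client = new WebClient();
string html = client.DownloadString("http://steamcommunity.com/profiles/*id*/games/?tab=all");

// Grab the JSON array assigned to rgGames in the inline script.
Match m = Regex.Match(html, @"var rgGames\s*=\s*(\[.*?\]);", RegexOptions.Singleline);
if (m.Success)
{
    JArray games = JArray.Parse(m.Groups[1].Value);
    foreach (JObject game in games)
    {
        Console.WriteLine(game["name"]); // property name is an assumption
    }
}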
I don't understand the SelectNodes call with the parameter "//*[@id='game_33120']"; maybe that is the problem, but you can also check this:
The real link of a Steam profile with badges etc. is
http://steamcommunity.com/id/id/badges/
and not
http://steamcommunity.com/profiles/id/badges/
After I visited a badges page, that URL stayed in the browser; on the games link, they redirect you to
http://steamcommunity.com
Maybe this can help you.

How to get an element using C#

I'm new to C#, and I'm trying to access an element from a website using a WebBrowser control. I'm wondering how I can get the "Developers" string from the site:
<div id="title" style="display: block;">
<b>Title:</b> **Developers**
</div>
I tried to use webBrowser1.Document.GetElementById("title"), but I have no idea how to keep going from here.
Thanks :)
You can download the source code using the WebClient class, then look within the file for the <b>Title:</b>**Developers**</div> and omit everything besides "Developers".
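A sketch of that plain string approach, which is fragile and assumes the <b>Title:</b> marker appears exactly once in the downloaded source (the URL is a placeholder):
using (WebClient client = new WebClient())
{
    string source = client.DownloadString("http://example.com/page"); // placeholder URL
    string marker = "<b>Title:</b>";
    int start = source.IndexOf(marker) + marker.Length;
    int end = source.IndexOf("</div>", start);
    // Everything between the marker and the closing </div> is the title text.
    string title = source.Substring(start, end - start).Trim();
    Console.WriteLine(title); // "Developers"
}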
HtmlAgilityPack and CsQuery are the routes many people have taken to work with HTML pages in .NET, and I'd recommend them too.
But in case your task is limited to this simple requirement, and you have <div> markup that is valid XHTML (like the sample you posted), then you can treat it as XML. That means you can use a native .NET API such as XDocument or XmlDocument to parse the HTML and perform an XPath query to get a specific part of it, for example:
var xml = @"<div id=""title"" style=""display: block;""> <b>Title:</b> Developers</div>";
//or according to your code snippet, you may be able to do as follow :
//var xml = webBrowser1.Document.GetElementById("title").OuterHtml;
var doc = new XmlDocument();
doc.LoadXml(xml);
var text = doc.DocumentElement.SelectSingleNode("//div/b/following-sibling::text()");
Console.WriteLine(text.InnerText);
//above prints " Developers"
The XPath above selects the text node ("Developers") next to the <b> node.
You can use HtmlAgilityPack (as mentioned by Giannis, http://htmlagilitypack.codeplex.com/). Using a web browser control is too much for this task:
HtmlAgilityPack.HtmlWeb web = new HtmlWeb();
HtmlAgilityPack.HtmlDocument doc = web.Load("http://www.google.com");
var el = doc.GetElementbyId("title");
string s = el.InnerHtml; // get the : <b>Title:</b> **Developers**
I haven't tried this code but it should be very close to working.
There must be an InnerText in HtmlAgilityPack as well, allowing you to do this:
string s = el.InnerText; // get the : Title: **Developers**
You can also remove the Title: by removing the appropriate node:
el.SelectSingleNode("//b").Remove();
string s = el.InnerText; // get the : **Developers**
If for some reason you want to stick to the web browser control, I think you can do this:
var el = webBrowser1.Document.GetElementById("title");
string s = el.InnerText; // get the : Title: **Developers**
UPDATE
Note that the //b above is XPath syntax which may be interesting for you to learn:
http://www.w3schools.com/XPath/xpath_syntax.asp
http://www.freeformatter.com/xpath-tester.html

Pull timer value from a webpage using XPath and C#

I am trying to pull some timer values off of websites using XPath with the HtmlAgilityPack. However, I get null reference exceptions because a particular node does not exist at the point where I grab it. To test why, I used doc.Save to check the nodes myself, and I found that the nodes truly do not exist. From my understanding, HtmlAgilityPack should download the webpage almost exactly as I see it, with all the data in there as well. However, most of the data is in fact missing.
How exactly am I supposed to grab the timer values, or even an event title from either of the following websites:
http://dulfy.net/2014/04/23/event-timer/
http://guildwarstemple.com/dragontimer/eventsb.php?serverKey=108&langKey=1
My current code to pull just the title of the event from the first timebox from guildwarstemple is:
public void updateEventData()
{
    //string Url = "http://dulfy.net/2014/04/23/event-timer/";
    string Url = "http://guildwarstemple.com/dragontimer/eventsb.php?serverKey=108&langKey=1";
    HtmlWeb web = new HtmlWeb();
    HtmlDocument doc = web.Load(Url);
    doc.Save("c:/doc.html");
    Title = doc.DocumentNode.SelectNodes("//*[@id='ep1']/p")[0].InnerText;
    //*[@id="scheduleList"]/div[3]
    //*[@id="scheduleList"]/div[3]/div[3]/text()
}
Your XPath expression fails because there is only one div with @id='ep1' in the document, and it has no p inside:
<div id="ep1" class="eventTimeBox"></div>
In fact, all the divs in megaContainer are empty in the link you are trying to load with your code.
If you think there should be p elements in there, it's probably being added dynamically via JavaScript, so it might not be available when you are scraping the site with a C# client.
In fact, there are some JavaScript variables:
<script>
...
var e7 = 'ep1';
...
var e7t = '57600';
...
Maybe you want to get that data. This:
substring-before(substring-after(normalize-space(//script[contains(.,"var e7t")]),"var e7t = '"),"'")
selects the <script> which contains var e7t and extracts the string in the apostrophes. It will return:
57600
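String-valued XPath functions like substring-before cannot be run through SelectNodes, which only returns nodes, but HtmlAgilityPack documents implement IXPathNavigable, so the expression can be evaluated with an XPathNavigator. A sketch, assuming the page still contains the var e7t variable:
using System.Xml.XPath;
using HtmlAgilityPack;

HtmlWeb web = new HtmlWeb();
HtmlDocument doc = web.Load("http://guildwarstemple.com/dragontimer/eventsb.php?serverKey=108&langKey=1");

// Evaluate a string-returning XPath expression over the whole document.
XPathNavigator nav = doc.CreateNavigator();
string e7t = (string)nav.Evaluate(
    "substring-before(substring-after(normalize-space(//script[contains(.,\"var e7t\")]),\"var e7t = '\"),\"'\")");
Console.WriteLine(e7t); // 57600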
The same with your other link. The expression:
//*[@id="scheduleList"]
selects an empty div. You can't navigate further inside it:
<div id="scheduleList" style="width: 720px; min-width: 720px; background: #1a1717; color: #656565;"></div>
But this time there seems to be no nested JavaScript that refers to it in the page.

C# find images in HTML and download them

I want to download all the images stored in an HTML page. I don't know how many images there will be, and I don't want to use the HTML Agility Pack.
I searched Google, but every site made me more confused.
I tried a regex, but got only one result...
People are giving you the right answer - you can't be picky and lazy, too. ;-)
If you use a half-baked solution, you'll deal with a lot of edge cases. Here's a working sample that gets all links in an HTML document using HTML Agility Pack (it's included in the HTML Agility Pack download).
And here's a blog post that shows how to grab all images in an HTML document with HTML Agility Pack and LINQ:
// Bing Image Result for Cat, First Page
string url = "http://www.bing.com/images/search?q=cat&go=&form=QB&qs=n";

// For speed of dev, I use a WebClient
WebClient client = new WebClient();
string html = client.DownloadString(url);

// Load the Html into the agility pack
HtmlDocument doc = new HtmlDocument();
doc.LoadHtml(html);

// Now, using LINQ to get all Images
List<HtmlNode> imageNodes = null;
imageNodes = (from HtmlNode node in doc.DocumentNode.SelectNodes("//img")
              where node.Name == "img"
                 && node.Attributes["class"] != null
                 && node.Attributes["class"].Value.StartsWith("img_")
              select node).ToList();

foreach (HtmlNode node in imageNodes)
{
    Console.WriteLine(node.Attributes["src"].Value);
}
First of all I just can't leave this phrase alone:
images stored in html
That phrase is probably a big part of the reason your question was down-voted twice. Images are not stored in HTML. HTML pages contain references to images, which web browsers download separately.
This means you need to do this in three steps: first download the HTML, then find the image references inside it, and finally use those references to download the images themselves.
To accomplish this, look at the System.Net.WebClient class. It has a .DownloadString() method you can use to get the HTML. Then you need to find all the <img /> tags; you're on your own here, but it's straightforward enough. Finally, use WebClient's .DownloadData() or .DownloadFile() methods to retrieve the images.
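A sketch of those three steps without the HTML Agility Pack, using a regex to pull out the src attributes. The regex is a deliberate simplification (it only handles double-quoted src values), and the URL is a placeholder:
using System;
using System.IO;
using System.Net;
using System.Text.RegularExpressions;

// Step 1: download the HTML.
var baseUri = new Uri("http://example.com/"); // placeholder URL
WebClient client = new WebClient();
string html = client.DownloadString(baseUri);

// Step 2: find the image references (double-quoted src attributes only).
foreach (Match match in Regex.Matches(html, "<img[^>]+src=\"([^\"]+)\"", RegexOptions.IgnoreCase))
{
    // Resolve relative paths against the page's URL.
    var imageUri = new Uri(baseUri, match.Groups[1].Value);

    // Step 3: download each image next to the executable.
    client.DownloadFile(imageUri, Path.GetFileName(imageUri.LocalPath));
}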
You can use a WebBrowser control and extract the HTML from that, e.g.
System.Windows.Forms.WebBrowser objWebBrowser = new System.Windows.Forms.WebBrowser();
objWebBrowser.Navigate(new Uri("your url of html document"));
// Navigate is asynchronous: wait for the DocumentCompleted event
// before reading the Document property.
System.Windows.Forms.HtmlDocument objDoc = objWebBrowser.Document;
System.Windows.Forms.HtmlElementCollection aColl = objDoc.GetElementsByTagName("IMG");
...
or directly invoke the IHTMLDocument family of COM interfaces
In general terms:
1. You need to fetch the HTML page.
2. Search for img tags and extract the src="..." portion out of them.
3. Keep a list of all these extracted image URLs.
4. Download them one by one.
Maybe this question about C# HTML parser will help you a little bit more.
