I have written a few programs over the last few months that load HTML pages into a string and do various things with them, like extracting bits and pieces. I was basically writing my own GUI for some websites which have no API.
I've done this by stringing together many String.Substring(), String.IndexOf(), and String.LastIndexOf() calls.
I realise this is probably not the best way to do it - I was just writing a few "quick-and-dirty" trials to begin with.
What is the proper way to extract tokens from a web page?
Thanks :)
For XHTML, load it into XmlDocument or XDocument.
For (non-X)HTML, load it into the HTML Agility Pack's HtmlDocument - the API is almost the same as XmlDocument, so it should be familiar.
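A minimal sketch of both routes (the file names and the link-extraction part are just placeholders for whatever bits and pieces you need):

using System;
using System.Linq;
using System.Xml.Linq;
using HtmlAgilityPack;

class ExtractionSketch
{
    static void Main()
    {
        // XHTML is well-formed XML, so XDocument (or XmlDocument) can load it directly.
        XDocument xhtml = XDocument.Load("page.xhtml");   // placeholder file name
        foreach (XElement link in xhtml.Descendants().Where(e => e.Name.LocalName == "a"))
            Console.WriteLine((string)link.Attribute("href"));

        // Plain HTML is usually not well-formed, so use the Agility Pack's HtmlDocument instead.
        var html = new HtmlDocument();
        html.Load("page.html");                            // placeholder file name
        var anchors = html.DocumentNode.SelectNodes("//a[@href]");
        if (anchors != null)
            foreach (HtmlNode anchor in anchors)
                Console.WriteLine(anchor.GetAttributeValue("href", ""));
    }
}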
Use Html Agility Pack
We are moving an e-commerce website to a new platform, and because all of their pages are static HTML and they do not have all their product information in a database, we must scrape their current website for the product descriptions.
Here is one of the pages: http://www.cabinplace.com/accrugsbathblackbear.htm
What is the best way to get the description into a string? Should I use Html Agility Pack, and if so, how would this be done? I am new to Html Agility Pack and XHTML in general.
Thanks
The HTML Agility Pack is a good library to use for this kind of work.
You did not indicate if all of the content is structured this way nor if you have already gotten the kind of fragment you posted from the HTML files, so it is difficult to advise further.
In general, if all pages are structured similarly, I would use an XPath expression to extract the paragraph and pick the innerHtml or innerText from each page.
Something like the following:
var description = htmlDoc.DocumentNode.SelectNodes("//p[@class='content_txt']")[0].InnerText;
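A fuller sketch of that approach (hedged: it assumes every product page marks the description with a p element of class content_txt, as on the page linked above):

using System;
using HtmlAgilityPack;

class DescriptionScraper
{
    static void Main()
    {
        // HtmlWeb downloads and parses the page in one step.
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("http://www.cabinplace.com/accrugsbathblackbear.htm");

        // Grab the description paragraph and read its text.
        HtmlNode node = doc.DocumentNode.SelectSingleNode("//p[@class='content_txt']");
        string description = node != null ? node.InnerText : string.Empty;

        Console.WriteLine(description);
    }
}

From there it is a matter of looping over the product URLs and writing each description to your database.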
Also, if you need a good tool for testing or finding the XPath for the HAP, you can use this one: HTML-Agility-xpath-finder. It is made using the same library, so if you find an XPath with this tool you can safely use it in your code.
This is just a general question. Currently I am doing webpage scraping using regex, but I find it is sometimes too difficult to figure out the regular expression, so I am wondering: is XSL/XPath an alternative to regex in C#?
Also, I would like to know if there are more advanced techniques for webpage scraping other than the two listed above. Thanks.
You may take a look at SgmlReader or the Html Agility Pack, which are HTML parsing libraries for .NET.
An easy way to gather data from a web page is WebsiteParser. It's based on the Html Agility Pack, and you simply describe your properties using attributes and CSS selectors.
Github here
I am using HttpWebRequest to put a remote web page into a String, and I want to make a list of all its script tags (and their contents) for parsing.
What is the best method to do this?
The best method is to use an HTML parser such as the HTML Agility Pack.
From the site:
It is a .NET code library that allows you to parse "out of the web" HTML files. The parser is very tolerant with "real world" malformed HTML. The object model is very similar to the one proposed by System.Xml, but for HTML documents (or streams).
Sample applications:
Page fixing or generation. You can fix a page the way you want, modify the DOM, add nodes, copy nodes, well... you name it.
Web scanners. You can easily get to img/src or a/hrefs with a bunch of XPath queries.
Web scrapers. You can easily scrape any existing web page into an RSS feed, for example, with just an XSLT file serving as the binding. An example of this is provided.
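For the script-tag question above, a minimal sketch with the Agility Pack might look like this (pageHtml is the string you already fetched with HttpWebRequest):

using System;
using System.Collections.Generic;
using HtmlAgilityPack;

class ScriptTagLister
{
    // Returns the contents of every script tag in the page.
    static List<string> GetScriptBlocks(string pageHtml)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(pageHtml);

        var scripts = new List<string>();
        var nodes = doc.DocumentNode.SelectNodes("//script");
        if (nodes != null)
        {
            foreach (HtmlNode node in nodes)
                scripts.Add(node.InnerHtml);   // the script's contents; OuterHtml keeps the tag itself
        }
        return scripts;
    }
}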
Use an XML parser to get all the script tags with their content.
Like this one: simple xml
Basically I want to extract keywords or words or tokens that are present in the webpage after removing the stopwords. Does anybody know how to do this? Code in C# would be appreciated.
Use an HTML parsing library like the HTML Agility Pack.
Once you load an HTML document with it, you can query it with XPath syntax - it exposes the HTML in a similar way to an XmlDocument.
The HTML Agility Pack that Oded mentions will help you get at the plain text inside the HTML, but to extract keywords from the webpage after removing the stopwords you'll need to do more work. There's a good informative answer from Joseph Turian to this question: How do I extract keywords used in text?
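To make that first step concrete, here is a rough sketch (the tiny stop-word list and the simple word regex are placeholders; swap in a real stop-word list and whatever tokenisation you prefer):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;
using HtmlAgilityPack;

class KeywordSketch
{
    // Placeholder stop-word list - use a proper one in practice.
    static readonly HashSet<string> StopWords = new HashSet<string>(
        new[] { "the", "a", "an", "and", "or", "of", "to", "in", "is", "it" });

    static IEnumerable<string> ExtractKeywords(string pageHtml)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(pageHtml);

        // Drop script and style blocks so their contents don't end up in the text.
        var nonText = doc.DocumentNode.SelectNodes("//script|//style");
        if (nonText != null)
            foreach (HtmlNode node in nonText)
                node.Remove();

        string plainText = doc.DocumentNode.InnerText;

        // Split into lowercase words and filter out the stop words.
        return Regex.Matches(plainText, @"[A-Za-z]+")
                    .Cast<Match>()
                    .Select(m => m.Value.ToLowerInvariant())
                    .Where(w => !StopWords.Contains(w))
                    .Distinct();
    }
}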
I want to store Google search results (both title and link) into database. HTML code of search results is like:
<br/>
<h3 class=r><a href="THEURL">THETITLE</a></h3>
And each page has 10 results. Can anyone show me how to retrieve THEURL and THETITLE?
Thank you so much!
You should give Html Agility Pack a try. An HTML parser is the correct way to read HTML content, not regular expressions.
But if you want to try at your own risk:
<h3 class=r><a .*? href="(?<url>[^"]*)".*?>(?<title>.*?)</a></h3>
You'll have problems with:
Line breaks
Unmatched tags
Minor HTML changes
So, good luck!
For starters, I would not recommend using regex for this; use the 'Html Agility Pack' to do the parsing of the HTML document.
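A minimal sketch of that parser-based approach, assuming the h3 class=r / a href structure shown in the question:

using System;
using HtmlAgilityPack;

class ResultScraperSketch
{
    // searchResultsHtml is the results page you already downloaded.
    static void PrintResults(string searchResultsHtml)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(searchResultsHtml);

        var links = doc.DocumentNode.SelectNodes("//h3[@class='r']/a[@href]");
        if (links != null)
        {
            foreach (HtmlNode link in links)
            {
                string url = link.GetAttributeValue("href", "");
                string title = link.InnerText;
                Console.WriteLine(url + " - " + title);
            }
        }
    }
}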
Hope this helps,
Best regards,
Tom.
Consider using the Google AJAX Search API instead. It will be easier on both you and Google's servers. There are some instructions for using it outside JavaScript environments. They don't give a C# example, but it shouldn't be difficult to adapt to your needs using one of the JSON APIs for C#.
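As a rough sketch only: the endpoint and JSON field names below (responseData, results, url, titleNoFormatting) are my recollection of that API's documentation, so check it before relying on them. This uses WebClient plus Json.NET as the JSON API:

using System;
using System.Net;
using Newtonsoft.Json.Linq;   // Json.NET, one of the JSON APIs mentioned above

class AjaxSearchSketch
{
    static void Main()
    {
        string query = Uri.EscapeDataString("black bear rugs");   // placeholder query
        string endpoint = "http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=" + query;

        using (var client = new WebClient())
        {
            string json = client.DownloadString(endpoint);
            JObject response = JObject.Parse(json);

            // Each result carries the URL and title as JSON fields.
            foreach (JToken result in response["responseData"]["results"])
                Console.WriteLine(result["url"] + " - " + result["titleNoFormatting"]);
        }
    }
}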
If you do stick with HTML, I also recommend HTML Agility Pack.
You should also think about caching so you minimize both stale data and unnecessary requests.