Get query results from a web site in C#

I am using C#. I have the IMEI number of a phone and need to get the phone's details from the http://www.imei.info web site in my C# application.
When I go to the web site and search for my phone's IMEI number, I see the following URL, http://www.imei.info/?imei=356061042215493, with my phone's details.
How can I do this in my C# application?

You can build the URL at run time, download the HTML page, and then parse it and extract the information you want with the Html Agility Pack. The code below is an example; you can then parse the returned data to extract your information.
private List<HtmlNode> GetPageData(string imei)
{
    HtmlDocument doc = new HtmlDocument();
    WebClient webClient = new WebClient();
    // WebPage holds the base URL, e.g. "http://www.imei.info/?imei="
    string strPage = webClient.DownloadString(
        string.Format("{0}{1}", WebPage, imei));
    doc.LoadHtml(strPage);
    // Change the parsing schema below to match the data you need.
    // SelectNodes returns null when nothing matches, so guard before calling ToList().
    HtmlNodeCollection nodes = doc.DocumentNode.SelectNodes("//table[@class='sortable autostripe']//tbody//tr//td");
    return nodes == null ? new List<HtmlNode>() : nodes.ToList();
}
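A possible call site, assuming WebPage is a constant holding the base search URL (that constant is not shown above, so treat it as an assumption):
// Hypothetical usage; WebPage is assumed to be defined elsewhere, for example:
// private const string WebPage = "http://www.imei.info/?imei=";
List<HtmlNode> cells = GetPageData("356061042215493");
foreach (HtmlNode cell in cells)
{
    Console.WriteLine(cell.InnerText.Trim());
}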

Unless they have an API, you're going to need to read the page details using an XML parser such as LINQ to XML or XmlReader.
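For instance, if the page happens to be served as well-formed XHTML, a minimal LINQ to XML sketch could look like the snippet below. That is an assumption: most real-world HTML is not valid XML and will make XDocument.Parse throw, in which case the Html Agility Pack approach above is the safer choice.
// Sketch only: assumes the response is well-formed XHTML.
using System.Linq;
using System.Net;
using System.Xml.Linq;

string page = new WebClient().DownloadString("http://www.imei.info/?imei=356061042215493");
XDocument xdoc = XDocument.Parse(page); // throws XmlException if the HTML is not valid XML
var cellTexts = xdoc.Descendants()
                    .Where(e => e.Name.LocalName == "td") // every table cell, regardless of namespace
                    .Select(e => e.Value)
                    .ToList();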

See WebClient.DownloadString and HtmlAgilityPack

Related

How to extract JSON embedded in an HTML page using C#

The JSON I wish to use is embedded in an HTML page. Within a <script> tag on the page there is a statement:
<script>
jsonRAW = {... heaps of JSON... }
Is there a parser to extract this from the HTML? I have looked at Json.NET, but it expects the JSON to be reasonably well formed on its own.
You can try the Html Agility Pack. It can be downloaded as a NuGet package.
After installing it, there is a tutorial on how to use the Html Agility Pack.
The link has more info, but in code it works like this:
var urlLink = "http://www.google.com/jsonPage"; // 1. The URL of the page that contains the JSON (placeholder).
var web = new HtmlWeb();                        // 2. Create the HtmlWeb helper.
var doc = web.Load(urlLink);                    // 3. Download and parse the page.
if (doc.ParseErrors != null)                    // 4. Check for parse errors and deal with them.
{
}
doc.DocumentNode.SelectSingleNode("//your/xpath/here"); // 5. Query the DOM with XPath.
There are other things in between, but this should get you started.
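For the specific case in the question, one way (a rough sketch, not a drop-in solution) is to find the script element containing the jsonRAW assignment, cut out the object literal, and hand it to Json.NET. The URL and the substring logic below are assumptions based on the snippet above.
// Sketch only: assumes the page contains <script>jsonRAW = { ... }</script>
using HtmlAgilityPack;
using Newtonsoft.Json.Linq;

var web = new HtmlWeb();
var doc = web.Load("http://www.example.com/page-with-json"); // placeholder URL
var script = doc.DocumentNode.SelectSingleNode("//script[contains(text(), 'jsonRAW')]");
if (script != null)
{
    string text = script.InnerText;
    // Cut from the first '{' to the last '}'; crude, but enough for a single object literal.
    int start = text.IndexOf('{');
    int end = text.LastIndexOf('}');
    string json = text.Substring(start, end - start + 1);
    JObject data = JObject.Parse(json);
}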

I can't get the content of a web page without the HTML tags in C#

I want to get the text of a web page in a Windows Forms application. I am using:
WebClient client = new WebClient();
string downloadString = client.DownloadString(link);
However, this gives me the HTML source of the web page.
Here is the question:
Can I get only a specific part of a website? For example, the part that has the class name "ask-page new-topbar". I want to get every part that has the class name "ask-page new-topbar".
No, you can't request only parts of a website when you send a request to a URL.
What you can do is use the Html Agility Pack and let it dig through the HTML to give you the contents of the node you are after.
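For example, here is a minimal sketch with the Html Agility Pack that keeps only the elements whose class attribute is exactly "ask-page new-topbar" (link is the same variable as in the question):
using System;
using System.Net;
using HtmlAgilityPack;

WebClient client = new WebClient();
string downloadString = client.DownloadString(link); // same call as in the question
HtmlDocument doc = new HtmlDocument();
doc.LoadHtml(downloadString);
// Select every element whose class attribute is exactly "ask-page new-topbar".
var nodes = doc.DocumentNode.SelectNodes("//*[@class='ask-page new-topbar']");
if (nodes != null)
{
    foreach (var node in nodes)
    {
        Console.WriteLine(node.InnerText); // the text content, without the HTML tags
    }
}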

C#: loading the HTML of the web page currently displayed

I am trying to make a small app that can log in to a website automatically, get certain text on the website, and return it to the user.
To show what I have, I did the following to make it log in:
System.Windows.Forms.HtmlDocument doc = logger.Document as System.Windows.Forms.HtmlDocument; // logger is presumably the WebBrowser control showing the login page
try
{
    doc.GetElementById("loginUsername").SetAttribute("value", "myusername");
    doc.GetElementById("loginPassword").SetAttribute("value", "mypassword");
    doc.GetElementById("loginSubmit").InvokeMember("click");
}
catch (NullReferenceException)
{
    // One of the login elements was not found on the page.
}
And the following to load the HTML of the page:
WebClient myClient = new WebClient();
Stream response = myClient.OpenRead(webbrowser.Url);
StreamReader reader = new StreamReader(response);
string src = reader.ReadToEnd(); // finally read the HTML and store it in a variable
Now, it successfully loads HTML, but it is the HTML of the page as if it were not logged in. Is there a way to refer to the current HTML somehow, or another way to achieve my goal? Thank you for reading!
Use the WebClient class so you can use sessions and cookies.
Check this Q&A: Using WebClient or WebRequest to login to a website and access data
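WebClient does not keep cookies between requests on its own, so a common workaround (sketched below; the login URL and form field names are assumptions based on the question) is to subclass it with a shared CookieContainer, POST the login form once, and then reuse the same instance for the page you actually want:
using System;
using System.Net;

// Minimal cookie-aware WebClient; the login URL and form field names are assumptions.
class CookieAwareWebClient : WebClient
{
    public CookieContainer Cookies { get; } = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        if (request is HttpWebRequest http)
            http.CookieContainer = Cookies;
        return request;
    }
}

// Usage: log in once, then request the protected page with the same client.
var client = new CookieAwareWebClient();
var form = new System.Collections.Specialized.NameValueCollection
{
    { "loginUsername", "myusername" }, // field ids taken from the form in the question
    { "loginPassword", "mypassword" }
};
client.UploadValues("https://example.com/login", form); // placeholder login URL
string html = client.DownloadString("https://example.com/protected-page"); // placeholder page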
Why don't you make REST API calls and send data such as the username and password from your code itself?
Is there a Web API for the URL? If yes, you can simply call the service and pass the required parameters. The API will return JSON/XML, which you can parse to extract the information.

Parse a webpage with a fragment identifier in the URL, using the HTML Agility Pack

I want to parse a web page with a fragment identifier (#) in the URL, e.g. http://steamcommunity.com/market/search?q=appid%3A570+uncommon#p4
When I use my browser (Google Chrome), I get different results for different identifiers (#p1, #p2, #p3), but when I use the HTML Agility Pack, I always get the first page, regardless of the page identifier.
string sURL = "http://steamcommunity.com/market/search?q=appid%3A570+uncommon#p";
WebClient wClient = new WebClient();
var html = new HtmlAgilityPack.HtmlDocument();
html.LoadHtml(wClient.DownloadString(sURL + i)); // i is the page number appended to the fragment
I understand that something like Ajax is used here and that in fact only one page exists. How can I fix my problem and get the results from the other pages using C#?
Like David said,
use this URL: http://steamcommunity.com/market/search/render/?query=appid%3A570%20uncommon&search_descriptions=0&start=30&count=10
where start is the index of the first item and count is the number of items you want.
The result is JSON, so, to state the obvious, you only want to use results_html.
Side note: in your Chrome browser (press F12), click the Network tab and you will see the request and the result being made.
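A sketch of that approach: request the render endpoint one slice at a time, take results_html out of the JSON, and feed it to the Html Agility Pack (Json.NET is assumed here for the JSON parsing).
using System.Net;
using Newtonsoft.Json.Linq;

var wClient = new WebClient();
int start = 0, count = 10; // page through the results ten at a time
string url = "http://steamcommunity.com/market/search/render/?query=appid%3A570%20uncommon"
           + "&search_descriptions=0&start=" + start + "&count=" + count;
string json = wClient.DownloadString(url);
// The endpoint returns JSON; results_html carries the markup for this slice of results.
JObject result = JObject.Parse(json);
string resultsHtml = (string)result["results_html"];
var html = new HtmlAgilityPack.HtmlDocument();
html.LoadHtml(resultsHtml);
// Parse html.DocumentNode here just like any other page.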

Extract contents from an HTTP request and then get selected contents from it

Just for learning purposes, I am playing with page requests and responses, and I need to know how I can achieve this. What I want to do is make an HTTP request from a Windows application and extract some content from it. For example:
I am calling http://stackoverflow.com/questions
Now, from the response, I want to extract all of the question nodes, which are inside <div id="questions">, format them, and then display them in a table. Can somebody guide me on how to do that? I hear that I can do the formatting and extracting with regular expressions too, but I am not sure how.
Thanks in advance
Lura
I suggest using the HTML Agility Pack - it will allow you to get the page directly and query it using XPath, similar to how XmlDocument works.
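For instance, here is a short sketch that pulls the question titles out of the <div id="questions"> container; the inner XPath for each title link is an assumption about the page's markup.
using System;
using HtmlAgilityPack;

var web = new HtmlWeb();
var doc = web.Load("http://stackoverflow.com/questions");
// Grab the container the question asks about...
var questions = doc.DocumentNode.SelectSingleNode("//div[@id='questions']");
if (questions != null)
{
    // ...then each question title link inside it (the class name is an assumption).
    var titles = questions.SelectNodes(".//a[@class='question-hyperlink']");
    if (titles != null)
    {
        foreach (var title in titles)
            Console.WriteLine(HtmlEntity.DeEntitize(title.InnerText));
    }
}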
You can use HttpWebRequest to get the source content of the page as follows.
string url = @"http://stackoverflow.com/users";
System.Net.WebRequest request = System.Net.HttpWebRequest.Create(url);
System.Net.HttpWebResponse response = (System.Net.HttpWebResponse)request.GetResponse();
System.IO.StreamReader stream = new System.IO.StreamReader(
    response.GetResponseStream(), System.Text.Encoding.GetEncoding("utf-8"));
// Note: XmlDocument.Load only succeeds if the page is well-formed XML/XHTML;
// for arbitrary HTML, read the stream as text or use the Html Agility Pack instead.
XmlDocument rssDoc = new XmlDocument();
rssDoc.Load(stream);
