I cannot see the whole HTML source code by pressing Ctrl+U - C#

I am working on scraping a website, so I made a desktop application for it.
When I check the website using Inspect Element I can see all of the website's data, but when I check it using the page source (Ctrl+U) there is nothing there.
That means I can't find any of the website's data in the page source, but I can see it in Firebug (Inspect Element).
Because of this, when I try to get the data using C# code I only get the page-source data, which doesn't contain any of the website's data, only the schema (structure) and JS links.
See the Firebug image below.
And this is the page source image.

You have met a JS-powered site. The content is dynamically loaded through JavaScript, so it is not visible in the page source. Turn to the scraping libraries that support JavaScript evaluation. See here for an example.
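One such library-driven approach can be sketched with Selenium's C# bindings. Everything here is an assumption for illustration: it presumes the Selenium.WebDriver NuGet package, a matching chromedriver on PATH, and a placeholder URL; any driver that executes JavaScript would do.

```csharp
// Sketch: render the page with a real browser engine so its JavaScript runs,
// then read the fully built DOM. Assumes the Selenium.WebDriver NuGet package
// and a matching chromedriver on PATH; the URL is a placeholder.
using OpenQA.Selenium.Chrome;

var options = new ChromeOptions();
options.AddArgument("--headless");  // no visible browser window

using (var driver = new ChromeDriver(options))
{
    driver.Navigate().GoToUrl("http://example.com/");  // your JS-powered page
    // PageSource holds the DOM *after* script execution,
    // unlike what Ctrl+U shows.
    string renderedHtml = driver.PageSource;
    System.Console.WriteLine(renderedHtml.Length);
}
```

The key difference from a plain `WebClient` download is that `PageSource` is read after the browser has run the page's scripts.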

Related

C# Get HTML generated by Javascript in a .Net Console Application

I have to write a console application that grabs and parses data from a website.
Unluckily, the website uses some kind of JavaScript framework to compose the page.
So what I need to do is get the HTML once the page has been rendered by JavaScript.
This is just the first step; my second step is to navigate the website to collect data from different pages, but unluckily the pages I have to parse do not have URLs, because they are loaded by JavaScript too...
Do you have any ideas?
Thanks for your support,
Dario

How to show only part of another website in my site using C#

I am using maps from maps.nyc.gov. What I want to do is show only the map from this website in my own website.
Let's say here is the sample URL:
http://maps.nyc.gov/doitt/nycitymap/?searchType=AddressSearch&addressNumber=498%20&street=7%20Avenue&borough=Manhattan
I only want to show the map from this site in my website, but I don't know how to do it.
I used an iframe, but it loads the complete website.
We can also use maps from this site. Sample link:
http://www.oasisnyc.net/map.aspx?zoomto=lot:4004310027
Please guide me.
Thanks
Possible solution:
Make an AJAX call (or another technique), and take only the part of the content you have retrieved onto your site.
If you want only the map, then I think you need the div tag mainCenter.
Or ask if they have an API (like Google Maps).
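The "take only part of the content" idea can be sketched like this in C#: download the HTML, then cut out just the element you need. The `mainCenter` id comes from the answer above, but the sample markup and helper are illustrative; a real HTML parser is preferable for production pages.

```csharp
using System;

// Naive extraction of one element's markup from an HTML string.
// Works only when the target div contains no nested <div>s;
// use a real HTML parser for anything more complex.
string ExtractDiv(string html, string id)
{
    string marker = "id=\"" + id + "\"";
    int attr = html.IndexOf(marker, StringComparison.Ordinal);
    if (attr < 0) return null;
    int open = html.LastIndexOf("<div", attr, StringComparison.Ordinal);
    int close = html.IndexOf("</div>", attr, StringComparison.Ordinal);
    if (open < 0 || close < 0) return null;
    return html.Substring(open, close + "</div>".Length - open);
}

// Sample stand-in for the downloaded page.
string html = "<body><div id=\"header\">x</div>"
            + "<div id=\"mainCenter\"><img src=\"map.png\"/></div></body>";
Console.WriteLine(ExtractDiv(html, "mainCenter"));
// → <div id="mainCenter"><img src="map.png"/></div>
```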
I did this using CSS.
I made an iframe, loaded that site in the iframe, and wrapped the iframe with a div.
Set the size of the div to the size of the part of the website you want to show.
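That trick can be sketched like this; the wrapper sizes and negative offsets are placeholders you would tune by hand until only the map region is visible:

```html
<!-- The wrapper div clips the oversized iframe; only the map area shows. -->
<div style="width: 500px; height: 400px; overflow: hidden; position: relative;">
    <iframe src="http://maps.nyc.gov/doitt/nycitymap/?searchType=AddressSearch&amp;addressNumber=498%20&amp;street=7%20Avenue&amp;borough=Manhattan"
            style="position: absolute; top: -150px; left: -200px; width: 1200px; height: 900px; border: 0;">
    </iframe>
</div>
```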

How to extract the extra content loaded in a web page

How do I extract the extra content loaded into a web page, which is not visible in View Page Source? The extra content is loaded using AJAX. The data can be seen under the Net tab in Firebug. How can I extract this data using C# code?
Two ways:
1- You can use a WebBrowser control to load the same page and get the active document.
2- You can replicate the AJAX call that is made, and use that to get the extra bits that are appended to the document.
And regarding your LinkedIn example above:
When you select the checkbox, an AJAX call is made which brings back results and populates the table. You can see that call in Firebug's console window, inspect the POST parameters, and replicate them to get the same result.
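Replicating such a call might look roughly like the sketch below. The endpoint URL and the form field names are hypothetical placeholders, not real parameters; in practice you copy the real ones from the request Firebug shows you.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Sketch: replay the same POST the page's JavaScript makes.
// The endpoint and parameter names below are placeholders; read the
// real ones from the request shown in Firebug's Net/Console panel.
async Task<string> ReplayAjaxPostAsync()
{
    using var client = new HttpClient();
    var form = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["checkboxId"] = "123",  // placeholder POST parameter
        ["page"] = "1",
    });
    HttpResponseMessage response =
        await client.PostAsync("http://example.com/ajax/endpoint", form);
    response.EnsureSuccessStatusCode();
    // Usually JSON or an HTML fragment rather than a full page.
    return await response.Content.ReadAsStringAsync();
}

// string json = await ReplayAjaxPostAsync();  // network call, run when ready
```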
It depends on your application in the first place. If you are using a C# application as the client reading a web page, then the AJAX content may not be visible until you put in a JavaScript engine.
If you are serving the said pages yourself, you only have to log the request/response on the server.
A more specific question would be appreciated.
That extra content is dynamically generated by AJAX (for example, a GridView is rendered as a table); it is stored in the browser's memory and can be viewed with client-side debugging tools (IE has a Developer Tools option).
Once you do a postback, all the controls' values are available to C#.
When you say extra content, can you please clarify what exactly you are trying to extract using C#?

How can certain things be omitted from the page source

I am writing a program that automatically starts up a web browser at a song of choice. In order to do this, my program uses an HttpWebRequest/response to
get the source code of a web page that contains a link to the audio source page,
search through that website's source code and find the audio source website,
open up Chrome at the specified web page, so I can listen to the song.
I am using project.com as my audio source, and I do not plan on using another site such as YouTube.
The problem I am having is that, while I can see a link to the audio source website when I load the initial website in Chrome, the page source does not contain it. For instance, this website
has a link to the audio source, http://pl.st/s/1709472017, which you can actually see on the initial page; but when I look at the page source using Chrome, I cannot find this audio source link.
If I right click on the audio source textbox and select inspect element, then I can see this:
<input class="copy-song-link"
type="textbox"
value="http://pl.st/s/1709472017"
name="url"
onclick="javascript:select();" title="Copy and share this song URL">
so this link is clearly located somewhere. My questions are these:
Why/how is this link not in the page source, even though I can see it when I look at the website through Chrome?
How come Chrome's "inspect element" can find this URL while the page source does not include it?
How does Inspect Element differ from looking at the source code?
I am pretty new to HTTP communication, so any help would be appreciated.
I generally use the Firebug plugin for Firefox in situations like this. Its "Net" tab lets you inspect all the subsequent requests (often AJAX) that occur while the page is loading.
In your case it appears an AJAX request collects the data that is used to generate the link you want to pull out. The data comes back as JSON, and JavaScript is most likely generating the links on the client side from that JSON. Take a look at this link:
http://www3.playlist.com/async/searchbeta/tracks?searchfor=r%20u%20mine
The linkid used to generate the link is in the first part of the response: ...PPL.search.trackdata = [{"linkid":1709472017...
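Pulling the linkid out of that response in C# is then a one-liner with a regular expression. The snippet below runs against a cut-down, hand-written copy of the response shown above, not a live request:

```csharp
using System;
using System.Text.RegularExpressions;

// Extract the first "linkid" value from the JSON-ish payload that the
// async/searchbeta/tracks call returns, then rebuild the pl.st URL.
string ExtractFirstLinkId(string response)
{
    Match m = Regex.Match(response, "\"linkid\":(\\d+)");
    return m.Success ? m.Groups[1].Value : null;
}

// Cut-down sample of the real response body.
string sample = "PPL.search.trackdata = [{\"linkid\":1709472017,\"title\":\"R U Mine?\"}]";
string linkId = ExtractFirstLinkId(sample);
Console.WriteLine("http://pl.st/s/" + linkId);  // → http://pl.st/s/1709472017
```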

C# AJAX or Java response HTML scraping

Is there a way in C# to get the output of AJAX or JavaScript? What I'm trying to do is grab the specifics of items on a webpage; however, the webpage does not load them into the original source. Does anybody have a good tutorial or a good place to start?
For example, I would want to get all the car listings from http://www.madisonhonda.com/Preowned-Inventory.aspx#layout=layout1
If the DOM is being modified by JavaScript through AJAX calls, and this modified data is what you are trying to capture, then using a standard .NET WebClient won't work. You need to use a WebBrowser control so that it will actually execute the script; otherwise you will just be downloading the source.
If you need to just "load" it, then you'll need to understand how the page functions and try making the AJAX call yourself. Firebug and other similar tools allow you to see what requests are made by the browser.
There is no reason you cannot make the same web request from C# that the original page is making from JavaScript. Depending on the architecture of the website, this could range in difficulty from constructing the proper URL with query string arguments (easy) to simulating a POST with lots of page state (hard). The response content would then most likely be XML or JSON instead of the HTML DOM, which, if you're scraping for data, is a plus.
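For the easy end of that range, building the URL with query-string arguments is plain string work. The sketch below reuses the playlist search URL pattern quoted earlier in this page purely as an illustration:

```csharp
using System;

// Build a GET URL with a properly escaped query-string argument.
string BuildSearchUrl(string baseUrl, string query)
{
    return baseUrl + "?searchfor=" + Uri.EscapeDataString(query);
}

string url = BuildSearchUrl("http://www3.playlist.com/async/searchbeta/tracks", "r u mine");
Console.WriteLine(url);
// → http://www3.playlist.com/async/searchbeta/tracks?searchfor=r%20u%20mine
```

You would then fetch `url` with `WebClient.DownloadString` or `HttpClient` and parse the JSON that comes back.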
A long time ago I wrote a VB app to screen-scrape financial sites, and made it so that you could fire up multiple of these "harvester" screen scrapers at once, which cut down the time spent loading data. We could do thousands of scrapes a day with several of these running on multiple boxes. Each harvester got its marching orders from information stored in the database, such as which customer to process next and what needed to be scraped (balances, transaction history, etc.).
Like Michael said above, make a simple WinForms app with a WebBrowser control in it. You have to trap the DocumentCompleted event, which only fires when the web page is completely loaded. Then check out this post, which gives an overview of how to do it.
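A rough sketch of that wiring is below (WinForms only; the URL is a placeholder). One subtlety worth noting: DocumentCompleted can fire once per frame, so code commonly checks `e.Url` against the main page before reading the document.

```csharp
using System;
using System.Windows.Forms;

// Sketch: let the WebBrowser control run the page's JavaScript,
// then read the rendered DOM once loading has finished.
class ScraperForm : Form
{
    private readonly WebBrowser browser = new WebBrowser();

    public ScraperForm()
    {
        browser.ScriptErrorsSuppressed = true;
        browser.DocumentCompleted += OnDocumentCompleted;
        Controls.Add(browser);
        browser.Navigate("http://example.com/");  // your target page
    }

    private void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
    {
        // Fires per frame; only act when the main document is done.
        if (e.Url != browser.Url) return;
        string renderedHtml = browser.DocumentText;  // post-JavaScript HTML
        Console.WriteLine(renderedHtml.Length);
    }
}
```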
Use the HTML Agility Pack. It allows downloading of .html files and scraping via XPath.
See How to use HTML Agility Pack
