I'm currently trying to create a small-scale CMS for my personal website, and I thought I'd try to build some sort of page layout from a basic .aspx file with some placeholders and load content based on the URL, without using URL query strings such as ?pageid=1.
I'm trying to wrap my head around how this can be achieved without getting errors about a physical file not existing when I type in, for example, http://mywebsite.com/projects/w8apps/clock.
I've read a lot about BLOBs and storing files as binary data in the database, but I haven't come across a blog that points in the direction of using a so-called page layout and loading content based on the URL instead of a query string.
I'm not asking for a solution, just some hints - blogs mostly - which can point me in the right direction and help me achieve this goal.
To deal with loading a page with a URL that is more friendly, rather than ?page_id=1, you may want to have a look at this article about URL Rewriting and URL Mapping.
http://www.codeproject.com/Articles/18318/URL-Mapping-URL-Rewriting-Search-Engine-Friendly-U
Hope you can find a way of fitting this kind of code into your application!
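As an illustration of the same idea, ASP.NET 4 also has built-in routing that maps friendly URLs onto a physical page. Here's a minimal sketch (the route pattern and page name are assumptions, not from the article):

    // Global.asax.cs - a rough sketch using ASP.NET 4 routing as an alternative
    // to classic URL rewriting; route pattern and page name are assumptions.
    using System;
    using System.Web.Routing;

    public class Global : System.Web.HttpApplication
    {
        void Application_Start(object sender, EventArgs e)
        {
            // Map friendly URLs like /projects/w8apps/clock onto a single layout page.
            // The placeholders become route values the page can read.
            RouteTable.Routes.MapPageRoute(
                "ContentPages",                  // route name
                "{section}/{category}/{page}",   // URL pattern (no physical file needed)
                "~/PageLayout.aspx");            // the layout page that renders the content
        }
    }

In the code-behind of PageLayout.aspx you could then read Page.RouteData.Values["section"], ["category"] and ["page"] to decide which content to load into the placeholders.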
Your question is too broad, but here are a couple of hints that will point you in the right direction.
Create clear specs before you start working on this. Do you really need to have URLs like http://mywebsite.com/projects/w8apps/clock? If so, check out ASP.NET MVC, since it has the best support for this.
Storing binary files in the database doesn't have much to do with this. You first need to think about what your tables will look like, and that depends on what you are trying to achieve.
I'd suggest you install an open source CMS and analyze it first. You'll probably find a lot of better ideas that way. Just go to CodePlex and search for CMS.
I need to get information from a couple of web sites. For example, this site.
What would be the best way to get all the links from the page so that the information can be extracted?
Sometimes I need to click on a link to get to other links inside it.
I tried WatiN, and I tried doing the same from within Excel 2007 with the Web Data option.
Could you please suggest a better way that I am not aware of?
NCrawler might be very useful for deep-level crawling. You can also set MaxCrawlDepth to specify how deep to go.
Have a look at WGet. It is an incredibly powerful tool for mining the content of a single page or an entire website. The options available allow you to dictate how many levels deep to follow in terms of links, what to do with static resources such as images, how to handle relative links, etc. It also does a very good job of mining pages which are generated dynamically, such as those served by CGI or ASP.
It's been around for many years in the 'nix world but executables compiled for Windows are readily available.
You would need to kick it off from .NET using Process.Start but you could then pipe the results into multiple files (which mimic the original website structure), a single file, or into memory by capturing standard output. Then you can do subsequent analysis such as extracting HREF HTML elements (if it is only links you are interested in) or grabbing the sort of table data evident in the link you provide in your question.
I realise this is not a 'pure' .NET solution but the power WGET offers more than compensates for this, in my opinion. I have used it myself in the past, in this way, for exactly the sort of thing I think you are trying to do.
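To make the Process.Start part concrete, here is a rough sketch of launching WGet and capturing its standard output in memory; the wget flags and URL below are assumptions:

    // A rough sketch of kicking off WGet from .NET and capturing its output.
    using System;
    using System.Diagnostics;

    class WgetRunner
    {
        static void Main()
        {
            var psi = new ProcessStartInfo
            {
                FileName = "wget.exe",                            // assumes wget.exe is on the PATH
                Arguments = "--quiet -O - http://example.com/",   // -O - writes the page to stdout
                UseShellExecute = false,
                RedirectStandardOutput = true,
                CreateNoWindow = true
            };

            using (var process = Process.Start(psi))
            {
                string html = process.StandardOutput.ReadToEnd();
                process.WaitForExit();

                // From here you can do your analysis, e.g. pull out the HREFs.
                Console.WriteLine("Downloaded {0} characters", html.Length);
            }
        }
    }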
I recommend using http://watin.org/. It is much simpler than wget :-)
Background info: 2 semesters of C# (WinForms), plenty of HTML/CSS skill, brand new to ASP.NET.
I'm building a site for a friend who's a photographer. It's just a gallery site, but he'd like to be able to update the galleries himself and he's not tech savvy in the least. So I'm using the following approach to the problem:
Using ASP.NET 4 WebForms:
I'm using System.IO to get the names of the folders which represent the "Galleries" and populating a TreeView control for navigation.
When a "Gallery" is selected, I have code that builds a (HTML)list of the image files and populates an UpdatePanel with this list.
As this is all based on the folders/files, I'm building him a secure admin page to upload files to new or existing galleries (folders). He'll also be able to edit (move/delete) the existing files from there.
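Roughly, the code behind this looks like the sketch below (folder paths and control names are simplified/assumed; GalleryTree is the TreeView on the page):

    using System;
    using System.IO;
    using System.Text;
    using System.Web.UI.WebControls;

    public partial class Gallery : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            if (IsPostBack) return;

            // Each sub-folder of ~/Galleries is one gallery.
            string root = Server.MapPath("~/Galleries");
            foreach (string dir in Directory.GetDirectories(root))
            {
                GalleryTree.Nodes.Add(new TreeNode(Path.GetFileName(dir)));
            }
        }

        // Called when a gallery node is selected: builds the HTML list of images.
        private string BuildImageList(string galleryName)
        {
            var sb = new StringBuilder("<ul>");
            foreach (string file in Directory.GetFiles(
                         Server.MapPath("~/Galleries/" + galleryName), "*.jpg"))
            {
                string name = Path.GetFileName(file);
                sb.AppendFormat(
                    "<li><img src=\"Galleries/{0}/{1}\" alt=\"{1}\" /></li>",
                    galleryName, name);
            }
            return sb.Append("</ul>").ToString();
        }
    }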
I got it all to work, which was a nice little victory, but I'm realizing this approach is not optimal, as none of the unique galleries are findable via a search engine or even by URL; the SEO value is nil; the browser back/forward buttons are useless...
Can you guys/gals recommend a better way to go about this?
Is there a way to modify what I've already done to optimize the project?
I'll gladly start over to do this right.
Thanks
A couple of suggestions. First, if you are doing this for fun and want to learn something, consider using ASP.NET MVC instead. Both will work, but doing it with MVC will give you more up-to-date and marketable skills.
Second, unless you really want to write the whole thing from scratch, consider using a package to do most of what you want and then customize it.
Something like this would work quite well: http://www.galleryserverpro.com/. It is open source, free/cheap, and well supported.
Since you are new to ASP.NET, you can learn a lot by picking through the open source code and seeing how people with more experience have already solved the very same issues.
When a "Gallery" is selected, I have
code that builds a (HTML)list of the
image files and populates an
UpdatePanel with this list.
Well, most of your problem is sitting inside this sentence: get rid of the UpdatePanel. When you make AJAX requests, the browser history is not updated, so SEO and the back/forward navigation buttons are always an issue with an UpdatePanel.
http://ajaxhistory.com/
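As a rough sketch of the alternative (control and page names here are assumptions), you can render each gallery as a plain hyperlink so every gallery gets its own real, crawlable URL:

    using System;
    using System.IO;
    using System.Web.UI.WebControls;

    public partial class GalleryNav : System.Web.UI.UserControl
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            string root = Server.MapPath("~/Galleries");
            foreach (string dir in Directory.GetDirectories(root))
            {
                string name = Path.GetFileName(dir);

                // A plain link gives the browser a real URL it can bookmark,
                // crawl and navigate with the back/forward buttons.
                LinkList.Controls.Add(new HyperLink
                {
                    Text = name,
                    NavigateUrl = "~/Gallery.aspx?gallery=" + Server.UrlEncode(name)
                });
                LinkList.Controls.Add(new Literal { Text = "<br />" });
            }
        }
    }

Here LinkList is assumed to be a PlaceHolder in the navigation user control.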
First of all, I hope my question doesn't bother you. I really need to get an idea of how I can accomplish this, but unfortunately I'm really a beginner; I'm crawling when it comes to programming. I'm struggling to learn it as best I can, and I'll be thankful for any help you give me.
Here's the task: I was asked to find a way to collect some data from a website using a C# application. This will be done every day, in order to update the data we'll use to calculate a financial index.
I know my question might sound vague; even telling me how I can be more precise will help. I know I seem desperate, but putting personal issues aside, my scholarship kind of depends on it.
Thanks in advance! (Please don't mind the bad English; I'm Brazilian and my English might not be that good yet.)
First, your English is fine. In fact, I thought you were a native speaker until you said otherwise.
The term you're looking for is 'site scraping'. See this question: Options for HTML scraping?. The second answer points to the HTML Agility Pack library, which you can use.
Now, there are two possibilities here. The first is you have to parse the HTML and scrape your data out of it. This is more computationally intensive and depends on the layout of the page. If they change the way the site looks, it could break the scraper.
The second possibility is they provide some XML or JSON web service you can consume. In this case you aren't scraping anything, but are rather using a true data feed. If the layout of the site changes, you will not break. Whether your target site supports this form of data feed is up to the site.
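For the second possibility the code can be very small. A sketch, assuming the site exposed a JSON endpoint (the URL and field names below are purely hypothetical):

    using System;
    using System.Collections.Generic;
    using System.Net;
    using System.Web.Script.Serialization;   // reference System.Web.Extensions

    class Quote
    {
        public string Symbol { get; set; }
        public decimal Price { get; set; }
    }

    class FeedClient
    {
        static void Main()
        {
            using (var client = new WebClient())
            {
                // Hypothetical feed URL - replace with whatever the site actually offers.
                string json = client.DownloadString("http://example.com/api/quotes");

                var quotes = new JavaScriptSerializer().Deserialize<List<Quote>>(json);
                foreach (var q in quotes)
                    Console.WriteLine("{0}: {1}", q.Symbol, q.Price);
            }
        }
    }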
If I understand your question, you're being asked to do some Web Scraping, where you 1) download the contents of a web page and 2) try to parse data from that content.
For step #1, you should look into using a WebClient object in C# to download the HTML from the web page. You give a WebClient object the URL you want to download the content from and get back a string containing the content (probably HTML) of that URL.
How you go about doing step #2 depends on what content is present at the web site. If you know of certain patterns you're looking for in the HTML, you can search the HTML string using various methods. A more general solution for parsing HTML data can be found through using the Html Agility Pack, which will let you handle the HTML as a tree structure (DOM).
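A minimal sketch of both steps together, assuming the HTML Agility Pack is referenced and that the values you want sit in table cells with a known class (the URL and XPath below are assumptions):

    using System;
    using System.Net;
    using HtmlAgilityPack;

    class Scraper
    {
        static void Main()
        {
            // Step 1: download the page content as a string.
            string html;
            using (var client = new WebClient())
            {
                html = client.DownloadString("http://example.com/prices");
            }

            // Step 2: parse the HTML into a tree and query it like a DOM.
            var doc = new HtmlDocument();
            doc.LoadHtml(html);

            var cells = doc.DocumentNode.SelectNodes("//td[@class='price']");
            if (cells != null)
            {
                foreach (HtmlNode cell in cells)
                    Console.WriteLine(cell.InnerText.Trim());
            }
        }
    }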
Use the WebClient class to get the page.
Turn the html into xml.
Use XPath to select the data you are interested in.
OK, this is a pretty straightforward app design, and a lot of the code you need already exists and can be reused. Since you're a beginner, I'll break down what you need to do into steps and recommend approaches.
1) You will use classes from System.Net to pull the web pages (WebClient being the easiest to use). You will want to have this part of the program run on a timer if you can (using the scheduled jobs feature of the OS) and have it just pull the pages and drop them in a folder (there is a rough sketch of this step after the list).
2) You have a second job which runs separately, pulling unread files from that folder, parsing them (the HTML Agility Pack library is best for this) and then storing the results in an index of some kind (Lucene is good for that).
3) You have a front end application of some sort (web or desktop) which queries that index for the information you're looking for.
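A rough sketch of step 1, the fetch job that a scheduled task would run once a day (the URLs and folder path below are assumptions):

    using System;
    using System.IO;
    using System.Net;

    class FetchJob
    {
        static readonly string[] Urls =
        {
            "http://example.com/page1",
            "http://example.com/page2"
        };

        static void Main()
        {
            string dropFolder = @"C:\data\unread";
            Directory.CreateDirectory(dropFolder);

            using (var client = new WebClient())
            {
                for (int i = 0; i < Urls.Length; i++)
                {
                    string html = client.DownloadString(Urls[i]);

                    // Time-stamped name so the parsing job can tell which files are new.
                    string fileName = string.Format("{0:yyyyMMddHHmmss}_{1}.html",
                                                    DateTime.Now, i);
                    File.WriteAllText(Path.Combine(dropFolder, fileName), html);
                }
            }
        }
    }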
I'm wondering if anyone knows of any open source code for contextualization via JS (JavaScript) or ASP.NET? That is, contextualization of content: determining "what" the content is.
It's an interesting area and I can't seem to find any previous projects on it.
I'd really appreciate any help.
Presumably you are looking to build something like a search engine that can find a relevant document in a sea of nondescript documents which do not contain any metadata, only their textual content.
Computers are notoriously bad at this kind of categorization, for the same reasons that they can identify spelling errors but not grammar errors. It's a pattern matching problem that relies on human context to determine the correct solution.
Google is good at this because it relies on human behaviors to create relevance (like how many links from other sites a page has).
The closest thing I can think of that will do what you want (without actually attaching genuine metadata to each document by hand) is full text search. The Wikipedia article has several references to software that does this.
Depending on what you want to do, it may be easier to mine your page for context after the content has been rendered. That way you are assured that you have the context in which the user is viewing the page. Here is a post about a jQuery plugin that highlights target words on an HTML page.
Here are some other plugins you might want to review:
quickSearch plugin
QuickSilver Search plugin
Is there anything I can do while coding in ASP.NET to make my website come out on top in search engines for general keywords? (For example: cars... assuming that my site is www.joshautos123.com)
Thanks
This has nothing to do with ASP.NET, Josh.
You need to start investigating SEO (Search Engine Optimization) in general.
This is a pretty broad topic (more info here) covering everything from keywords, content, and URL formatting to cross-linking with lots of different sites/resources.
The best thing you can do, if you're only developing the site (and not responsible for marketing), is to put together a well-designed, clean, standards-compliant site.
You can follow the search engine optimization guidelines for your headers, images, etc.
That is the way you can achieve it.
Anyway, you can read this tutorial; it will help you.
For more, you can create a good master page with good meta tags. It will help you...
This sounds a bit "black hat" to me.
The most important aspect of SEO is content, so as long as you have lots of good, meaningful content relating to "cars", you stand a good chance.
Next up, you want lots of reliable sources to link to you with meaningful keywords in the links, plus a whole bunch of other SEO tricks.
When building the ASP.NET application, you can make sure you facilitate SEO best practices, but at the end of the day, if your site is full of nonsense and people don't think it's worth linking to, you're not going to get very good rankings.
Most of the SEO enhancements can be done outside of ASP.NET,
so just add the right tags and words to the .aspx file,
or, for dynamic pages, generate them from the content.
But there are two things you should also keep in mind:
First of all, run your whole site under one URL and redirect all other URLs to it with a 301 (e.g. redirect http://joshautos123.com to www.joshautos123.com with a 301 redirect),
and do not use subdomains for different parts of the site (that will split your search rank).
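A minimal sketch of that 301 redirect in Global.asax, assuming you standardise on the www host name:

    using System;
    using System.Web;

    public class Global : HttpApplication
    {
        void Application_BeginRequest(object sender, EventArgs e)
        {
            Uri url = Request.Url;
            if (url.Host.Equals("joshautos123.com", StringComparison.OrdinalIgnoreCase))
            {
                // 301 = Moved Permanently, so search engines transfer the rank to the www URL.
                Response.StatusCode = 301;
                Response.AddHeader("Location", "http://www.joshautos123.com" + url.PathAndQuery);
                Response.End();
            }
        }
    }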
Second, you should use ASP.NET MVC,
because one important part of SEO is having the right content (e.g. titles, real car names, etc.) in the URL.
The rest was already said in the other answers...
Try visiting this website for a start: Using Meta Tags with Master Pages in ASP.NET
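For instance, ASP.NET 4 lets you set per-page meta tags straight from code-behind, as long as the master page's <head> has runat="server" (the page name and text below are only examples):

    using System;
    using System.Web.UI;

    public partial class CarListing : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // These properties exist on Page in ASP.NET 4 and are rendered into the <head>.
            Page.Title = "Used cars for sale - Josh Autos";
            Page.MetaDescription = "Browse our range of quality used cars, updated daily.";
            Page.MetaKeywords = "cars, used cars, josh autos";
        }
    }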