Is it possible to compress JavaScript files, or anything else related to a web page, before sending it to the client?
I am using Telerik controls, and found that they write a lot of extra JavaScript that makes the page size huge (something around 500KB).
If you are using IIS7, it has support for compression built in. Highlight the web application folder (or even the website) in the tree view of IIS Manager, select Compression in the IIS panel of the next pane, then choose Open Feature in the right-hand pane. You then have two checkboxes to enable compression on static and dynamic content.
Be aware, though, that this may not be a silver bullet: it will increase load on the server, and it will increase load on the client as the browser unzips the content. 500KB is a moderate-sized page, but it isn't big. Compression like this is usually only beneficial when the network pipe is the problem, which it seldom is these days. Your issue may have more to do with lots of JavaScript running during the page's onload; if you see a noticeable difference in speed between IE7 and IE8, that may be an indication of this problem.
You can combine and minify your *.js and *.css files with http://github.com/jetheredge/SquishIt/
But I don't know whether it can help you compress Telerik's scripts.
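For what it's worth, here is a minimal sketch of how SquishIt is typically wired up in an ASPX master page; the script paths are placeholders and the exact API may differ slightly between versions:
<%-- Combine and minify the listed scripts into one bundled file (paths are examples) --%>
<%= SquishIt.Framework.Bundle.JavaScript()
        .Add("~/Scripts/jquery.js")
        .Add("~/Scripts/site.js")
        .Render("~/Scripts/combined_#.js") %>
The "#" in the output name is replaced with a hash of the contents, so clients should only re-download the bundle when it actually changes.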
GZIP, minification and packing will work, provided you have access to the .js files. You can do this as a one-off or programmatically before sending them to the client.
Check this out.
http://www.julienlecomte.net/blog/2007/08/13/
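If you want to gzip programmatically in ASP.NET rather than through IIS, a rough sketch is to wrap Response.Filter in a GZipStream in Global.asax (the placement and names here are just one way to do it):
protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpApplication app = (HttpApplication)sender;
    string acceptEncoding = app.Request.Headers["Accept-Encoding"];
    // Only compress when the client says it can handle gzip.
    if (!string.IsNullOrEmpty(acceptEncoding) && acceptEncoding.Contains("gzip"))
    {
        app.Response.Filter = new System.IO.Compression.GZipStream(
            app.Response.Filter, System.IO.Compression.CompressionMode.Compress);
        app.Response.AppendHeader("Content-Encoding", "gzip");
    }
}
You would typically skip this for content that is already compressed (images, for example) and let IIS handle static files.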
We are using Html Agility Pack to scrape data from an HTML-based site; is there any DLL like Html Agility Pack for scraping a Flash-based site?
It really depends on the site you are trying to scrape. There are two types of sites in this regard:
If the site has the data inside the SWF file, then you'll have to decompile the SWF file and read the data inside. With enough work you can probably do it programmatically. However, if this is the case, it might be easier to just gather the data manually, since it probably isn't going to change much.
In most cases, however, especially with sites that have a lot of data, the Flash file is actually contacting an external API. In that case you can simply ignore the Flash altogether and go to the API directly. If you're not sure, just activate Firebug's Net panel and start browsing; if the site is using an external API it should become obvious.
Once you find that API, you could probably reverse engineer how to manipulate it to give you whatever data you need.
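As a purely hypothetical sketch (the URL and response format below are made up), hitting such an API from C# is just an ordinary HTTP request once you know the endpoint:
// URL discovered via Firebug's Net panel; everything here is illustrative.
var request = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(
    "http://example.com/api/listings?page=1");
request.UserAgent = "Mozilla/5.0"; // some backends check for a browser-like user agent
using (var response = (System.Net.HttpWebResponse)request.GetResponse())
using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
{
    string payload = reader.ReadToEnd(); // usually JSON or XML
    // parse the payload with whatever JSON/XML library you prefer
}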
Also note that if it's a big enough site, there are probably non-Flash ways to get to the same data:
It might have a mobile site (with no Flash) - try accessing the site with an iPhone user-agent.
It might have a site for crawlers (like Googlebot) - try accessing the site with a Googlebot user-agent.
EDIT:
If you're talking about crawling (getting data from any random site) rather than scraping (getting structured data from a specific site), then there's not much you can do; even Googlebot isn't scraping Flash content. That's mostly because, unlike HTML, Flash doesn't have a standardized syntax from which you can immediately tell what is text, what is a link, etc.
You won't have much luck with the HTML Agility Pack. One method would be to use something like FiddlerCore to proxy HTTP requests to/from a Flash site. You would start the FiddlerCore proxy, then use something like the C# WebBrowser control to go to the URL you want to scrape. As the page loads, all of those HTTP requests get proxied and you can inspect their contents. However, you wouldn't get most of the text, since that's often static within the Flash. Instead, you'd mostly get larger content (videos, audio, and maybe images) that is usually stored separately. This will be slow compared to more traditional scraping/crawling because you actually have to execute/run the page in a browser.
If you're familiar with all of those YouTube downloader type of extensions, they work on this same principle, except that they intercept HTTP requests directly from Firefox (for example) rather than going through a separate proxy.
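If you want to try the FiddlerCore route, a rough sketch looks like the following; treat it as illustrative rather than a drop-in implementation (the port number and MIME-type filtering are arbitrary choices):
// Log every completed HTTP session made while the Flash page runs in a browser.
Fiddler.FiddlerApplication.AfterSessionComplete += session =>
{
    Console.WriteLine(session.fullUrl);
    string contentType = session.oResponse.headers["Content-Type"] ?? "";
    if (contentType.Contains("json") || contentType.Contains("xml"))
    {
        Console.WriteLine(session.GetResponseBodyAsString());
    }
};
// Register as the system proxy, then drive a browser (e.g. the WebBrowser control) to the Flash page.
Fiddler.FiddlerApplication.Startup(8877, true, false);
Console.WriteLine("Browse the Flash site now; press Enter to stop.");
Console.ReadLine();
Fiddler.FiddlerApplication.Shutdown();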
I believe that Google and some of the big search engines have a special arrangement with Adobe/Flash and are provided with some software that lets their search engine crawlers see more of the text and things that Google relies on. Same goes for PDF content. I don't know if any of this software is publicly available.
Scraping Flash content would be quite involved, and the reliability of any component that claims to do so is questionable at best. However, if you wish to "crawl" or follow hyperlinks in a Flash animation on some web page, you might have some luck with Infant. Infant is a free Java library for web crawling, and offers limited / best-effort Flash content hyperlink following abilities. Infant is not open source, but is free for personal and commercial use. No registration required!
How about capturing the whole page as an image and running OCR on it to read the data?
The Plan
I need to upload a file without causing a full page refresh.
The uploaded files should be stored in a temporary place (session or cookies).
I will only save the files on the server if the user successfully fills in all the form fields.
Note: this is one of the slides of a jQuery slider, so a full refresh would ruin the user experience.
The Problem
If I place a file upload control inside an AJAX UpdatePanel, I won't be able to access the file on the server side.
Note: from what I have found so far, this happens for security reasons:
Can't be done without co-operating binaries being installed on the client. There is no safe mechanism for an AJAX framework to read the contents of a file and therefore be able to send it to the server. The browser supports that only as a multipart form post from a file input box.
The Questions
When storing the files in a temporary location, should I use session or cookies? (what if the user has cookies disabled?)
If preventing a postback really is against the standards of user safety, will it harm my website's reputation (regarding SEO and such)?
Which road to take?
C# ASP.NET with AJAX? (is there a workaround?)
C# ASP.NET + AJAX Control Toolkit? Does it help? (using the AsyncFileUpload control)
C# ASP.NET + jQuery control? (won't I have problems fetching the data from the JavaScript?)
C# ASP.NET + iframe? (not the most elegant solution)
The total amount of cookie storage you can use is limited to a few kilobytes, so that's not a viable option for storing a file. That leaves the session as the only remaining option. Also consider saving the file to the file system and removing it if it's not going to be used, as storing files in memory (session) will limit how many users you can handle at once.
No, for functions like uploading files you don't have to worry about that. Search engines don't try to use such functions when scanning the page.
You can use an AJAX upload in browsers that support direct file access, but there is no way around doing a post if you need to support all browsers. However, the post doesn't have to end up loading a new page: you can put the form in an iframe, or point the target of the form at an iframe.
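To make the temporary-storage part concrete, here is a sketch of a server-side handler that such an iframe-targeted form could post to; the handler name, form field name and folder are all placeholders:
// Saves the posted file into a per-session temp folder; keep it only if the rest of the form is later completed.
public class TempUploadHandler : System.Web.IHttpHandler, System.Web.SessionState.IRequiresSessionState
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        var file = context.Request.Files["upload"]; // name of the <input type="file"> in the posted form
        if (file != null && file.ContentLength > 0)
        {
            string folder = context.Server.MapPath("~/App_Data/Temp/" + context.Session.SessionID);
            System.IO.Directory.CreateDirectory(folder);
            file.SaveAs(System.IO.Path.Combine(folder, System.IO.Path.GetFileName(file.FileName)));
        }
        context.Response.Write("OK"); // the hidden iframe receives this, not the visible page
    }
    public bool IsReusable { get { return false; } }
}
A scheduled cleanup (or a check on session end) can delete the folder if the form is never completed.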
I need to display a fairly large amount of data in a GridView (around 1000 rows by 10-20 columns), and I see that the first rendering is extremely slow in IE8 (even with compatibility mode enabled). The same page loads very fast in Firefox and Chrome, but unfortunately I have to target IE for this project.
What can I do to improve IE's behavior?
You already know that for a large data source the rendering will be slow :)
You can try the answers on this post:
Why do my ASP.NET pages render slowly when placed on the server?
In particular, look at this answer: https://stackoverflow.com/a/730732/448407
But before all of that, why don't you use paging in the GridView?
This will allow the page to open faster, since there will be less data to render, but it will not be a performance boost at the database level.
For that you need custom paging:
http://www.aspsnippets.com/Articles/Custom-Paging-in-ASP.Net-GridView-using-SQL-Server-Stored-Procedure.aspx
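If the stored-procedure approach from that article isn't an option, here is a minimal custom-paging sketch instead; the table, column, control and connection-string names are placeholders, and OFFSET/FETCH requires SQL Server 2012 or later:
// Fetch only the requested page of rows from the database and bind it to the grid.
protected void BindGrid(int pageIndex, int pageSize)
{
    using (var conn = new System.Data.SqlClient.SqlConnection(connectionString))
    using (var cmd = new System.Data.SqlClient.SqlCommand(
        @"SELECT Id, Name, CreatedOn FROM dbo.MyTable ORDER BY Id
          OFFSET @Skip ROWS FETCH NEXT @Take ROWS ONLY", conn))
    {
        cmd.Parameters.AddWithValue("@Skip", pageIndex * pageSize);
        cmd.Parameters.AddWithValue("@Take", pageSize);
        var table = new System.Data.DataTable();
        new System.Data.SqlClient.SqlDataAdapter(cmd).Fill(table);
        MyGridView.DataSource = table;
        MyGridView.DataBind();
    }
}
This way neither SQL Server nor the browser has to deal with all 1000 rows at once.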
Are you using JavaScript to render the page, or is the whole HTML coming from the server?
If JavaScript, then you need to switch to server-side rendering. Maybe use a DataGrid on the server.
If you have a large amount of CSS, especially descendant selectors like .parentClass .childClass { ... }, it performs worse in IE.
Another possibility is that your page is downloading a lot of scripts, CSS and images. IE is usually slower than Firefox and Chrome at fetching a lot of external resources.
So, my suggestions would be to:
Render the HTML directly from the server.
Set EnableViewState = false on the DataGrid (see the sketch after this list).
Clean up the CSS.
Reduce the number of scripts, CSS files and images.
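As a tiny illustration of the view-state suggestion, assuming the grid is called MyDataGrid (turning view state off keeps the serialized grid state out of the rendered page, which can shrink the HTML considerably):
protected void Page_Init(object sender, EventArgs e)
{
    // Equivalent to EnableViewState="false" in the markup.
    MyDataGrid.EnableViewState = false;
}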
Let me know if it helps. If not, please provide the HTML output from your page.
I have an ASP.NET/C# web app. When the user leaves a certain page, I would like to delete one particular temp file on the client machine, in the temp folder. Can I do this at all? Can I do it server-side or client-side?
Thanks.
You can't delete a file from the end user's machine without using something like an ActiveX control, and that would tie your users to Internet Explorer.
A better solution might be to set the applicable caching directives so that the browser doesn't store the file in its cache; that way it won't actually be written to disk (I'm assuming here that the file is one that is being pulled down by the browser as part of viewing/loading the page).
For example:
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Cache.SetExpires(DateTime.Now.Subtract(new TimeSpan(1, 0, 0, 0)));
Response.Cache.SetNoStore();
If you really wanted to do this, and it wasn't as simple as preventing a file from being cached, then as I said, an ActiveX control would be pretty much the only option. If you were going to develop an ActiveX control to do this, I'd strongly recommend you review MSDN's documentation on Per-Site ActiveX Controls. Deploying an ActiveX control, even within an intranet, that allowed files to be deleted from the end user's PC from any domain could only be considered reckless at best, negligent at worst.
For security reasons, this is completely impossible.
(Unless you're asking to delete your own cookie)
If you don't want the browser to cache your files, you can use the HTTP caching headers.
I would like to open multiple download dialog boxes after the user clicks on a link.
Essentially, what I am trying to do is allow the user to download multiple files. I don't want to zip up the files and deliver one zipped file, because that would require a lot of server resources given that some of the files are somewhat large.
My guess is that there may be some way with JavaScript to kick off multiple requests when the user clicks on a certain link, or maybe there is a way on the server side to start another request.
Unless the client is configured to automatically download files, you can't accomplish this without packaging the files in a single response (like the ZIP solution you mentioned). It would be a security issue if a web site were able to put an arbitrarily large number of files on your disk without telling you.
By the way, you might be overestimating the cost of packaging into a single file. Streaming files is usually an I/O-bound operation; there should be enough CPU cycles to spare for piping the data through some storage (tar) or compression (zip) method.
If you absolutely, positively cannot zip at the server level, this would probably be a good case for some sort of custom "download manager" client-side plugin that you would have the user install; then you could have complete control over how many files are downloaded, where they go, etc.
I suppose you could link to a frameset document or a document containing iframes, and set the src of each frame to one of the files you want to download.
That said, a zipped version would be better. If you are concerned about the load, then either:
zip the files with compression set to none (see the sketch after this list), or
use caching on the server so you zip each group of files only once.
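Here is a sketch of the "compression set to none" option; it requires .NET 4.5+ with references to System.IO.Compression and System.IO.Compression.FileSystem, and the method name, file names and paths are illustrative:
using System.IO;
using System.IO.Compression;

public static void StreamUncompressedZip(System.Web.HttpResponse response, string[] filePaths)
{
    response.ContentType = "application/zip";
    response.AddHeader("Content-Disposition", "attachment; filename=files.zip");
    using (var zip = new ZipArchive(response.OutputStream, ZipArchiveMode.Create, leaveOpen: true))
    {
        foreach (string path in filePaths)
        {
            // Store each file without compressing it, so the server does little more than I/O.
            zip.CreateEntryFromFile(path, Path.GetFileName(path), CompressionLevel.NoCompression);
        }
    }
}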
Present a page with a form of checkboxes for the available files, with multiple selection enabled.
The user selects multiple files and submits the form.
The server accepts the request and creates a page with serially triggered file-download JavaScript.
The page with the embedded JavaScript is presented to the user's browser, listing the files to be serially downloaded and asking for confirmation.
The user clicks the [yes - serially swamp my hard disk with these files] button.
For each file, a download-completed listener triggers the next download, until the end of the list.
I only know how to do this using Google GWT, where I had set up GWT RPC between the browser and the server. It took me two weeks to understand GWT RPC and perfect the download. Now it seems rather simple.
Basically (do you know "basically" is one of the most used non-technical words in the geek community?), you have to declare a server service class specifying the datatype/class to transfer, where the datatype must implement Serializable. Then on the browser side the GWT client declares a corresponding receiver class specifying the same serializable datatype. The browser side implements a listener for onSuccess and onFailure.
Hey, I even managed to augment the GWT service base class so that I could use JSP rather than plain servlets to implement the service interface.
Actually, I was not downloading a series of files but a series of streams, each conditionally triggering the next, because my onSuccess routine would inspect the current stream to decide what content to request in the next one.
OK, two weeks was an exaggeration; it took me a week to do it. A genius would have taken only half a day.
I don't see what the big deal is with this. Why not something like this:
<a href="#" id="myLink">Click me</a>
<script type="text/javascript">
$('a#myLink').click(function() {
window.open('http://www.mysite.com/file1.pdf', 'file1');
window.open('http://www.mysite.com/file2.pdf', 'file2');
window.open('http://www.mysite.com/file3.pdf', 'file3');
});
</script>