Is it possible to add a url from another domain to the bundling in Microsoft.Web.Optimization?
I want to add a reference to replace the following link:
<link href='http://fonts.googleapis.com/css?family=PT+Sans+Narrow:400,700' rel='stylesheet' type='text/css'>
My code for creating a CSS bundle, which works with local files, is below:
// Bundle the shared CSS and minify it with the default CSS minifier
Bundle cssCommon = new Bundle("~/cssCommon", typeof(CssMinify));
// Include every aom.common.* file in ~/content/ (top level only, no subdirectories)
cssCommon.AddDirectory("~/content/", "aom.common.*", false);
BundleTable.Bundles.Add(cssCommon);
Actually, leaving that content on the other host is probably the better thing to do.
You benefit from Google's global distribution: the font may be 'closer' on the network and arrive with less lag than it would from your own web server.
The browser can download that content in parallel with your main content (each host gets its own set of download connections, so it won't block your existing content or add to its total transfer time).
You benefit from Google's server resilience and uptime.
The end user's real-world experience may actually be better as a result.
That doesn't really make sense, because bundling reduces the number of requests to your own server, and googleapis is obviously on another server.
There is the concept of CDNs, though, and the CDN location can be changed per release; this link has some useful information on using a CDN with Bundling and Minification.
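For what it's worth, later releases of the library (as System.Web.Optimization) let a bundle declare a CDN path that is rendered instead of the local bundle URL; a minimal sketch, where the bundle name is made up and the CDN URL is the Google Fonts link from the question:

// When UseCdn is true, the CDN path is emitted instead of the local bundle URL
BundleTable.Bundles.UseCdn = true;
var fonts = new StyleBundle("~/bundles/fonts",
    "http://fonts.googleapis.com/css?family=PT+Sans+Narrow:400,700");
BundleTable.Bundles.Add(fonts);
// In the view, Styles.Render("~/bundles/fonts") then emits the CDN link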
Is there any library in Xamarin which would store the pages that we browse in WKWebView?
I have to store the resources of the pages (CSS, fonts, JS, etc.) for offline viewing. The complexity is maintaining the folder structure and managing the resource URLs within the CSS and JS files. Any idea how the resources can be stored and loaded?
There are resources on how to save an HTML page and load the HTML in WKWebView.
Please note that this question is not about that. It is more about storing and managing the resources of the visited pages for offline viewing.
I don't think you're going to get your answer with a mobile-only approach. It's not impossible to create one, but I don't believe anything exists that will do what you want (happy to be proven wrong). I think you need to think outside the square a bit.
I can't give you the entire set of code because I don't own it (my company does) but I managed to take a website completely offline (with limitations of course) by using multiple resources to achieve the desired outcome.
I used a piece of software called Cyotek WebCopy in an Azure VM to scrape all of the website down to a folder. That folder was then zipped up and uploaded to Azure Blob Storage so it could be accessed from anywhere. The Xamarin app would then access the storage container, retrieve all of the blobs and then when a user clicks on a specific blob, it unzips down to the device and then opens up in a web view for the user to browse.
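The retrieve-and-unzip step on the app side is small; here is a minimal sketch using the Azure.Storage.Blobs SDK and Xamarin.Essentials, assuming an async context, with the connection string, container, and blob names made up:

using System.IO;
using System.IO.Compression;
using Azure.Storage.Blobs;
using Xamarin.Essentials;

// Download the zipped site snapshot from Blob Storage
var container = new BlobContainerClient(connectionString, "site-snapshots");
var blob = container.GetBlobClient("site.zip");
var zipPath = Path.Combine(FileSystem.CacheDirectory, "site.zip");
await blob.DownloadToAsync(zipPath);

// Unpack next to it and point the web view at the local index.html
var sitePath = Path.Combine(FileSystem.CacheDirectory, "site");
ZipFile.ExtractToDirectory(zipPath, sitePath);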
All of this was achieved using a web service and PowerShell scripts on the VM side and then of course your standard Xamarin based application for viewing.
Like I said, there are limitations to this but barring external links and database calls (like a submission page), it will work for you. It has worked for us.
It may sound like a lot of work, but the VM side took me about two days and the Xamarin concept about five, so it doesn't take long to stand something up that can be built upon. I hope that helps.
I have seen on various websites how developers version their CSS/JavaScript files by specifying query strings similar to:
<head>
<link rel="stylesheet" href="css/style.css?v=1">
<script src="js/helper.js?v=1"></script>
</head>
How is that done? Is it a good practice? I've been searching around but apparently, I'm not looking for the right terms. If it matters, I'm using ASP.NET.
Edit: I just noticed (via Firebug) that if I "version" my files (?v=1), they always get re-downloaded and never come from the cache. Is there a way around that?
Thanks in advance.
They're not really versioned. We do that because certain browsers won't always re-request stylesheets properly (they won't even check for a last-modified date), so to force them to make a new request you can bump the number in the HTML file that references them. It's kind of a hack, but it works.
This helps with caching when you want it and forces a download when you don't. Files are cached based on their URL, query string included. If the URL is the same, the browser can pull from cache; if it's different, hence a new version, it won't use the cache and should pull the new file. At least that is how I have used this.
They are doing this to make browser caching more reliable. You can add the version manually and increment it every time you change the file; that way the browser thinks it has a new file and downloads it for sure.
I don't know of a built-in way to do this automatically in ASP.NET. Ruby on Rails, for example, checks the last-changed timestamp on the file and appends it to the URL as the version number. I'm sure you'll be able to do something similar in ASP.NET.
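A rough sketch of that Rails trick in ASP.NET might look like this (the helper name is made up, and it assumes the file lives under the application root):

using System.IO;
using System.Web;

public static class AssetHelper
{
    // Appends the file's last-write time as a cache-busting version number
    public static string Versioned(string virtualPath)
    {
        string physicalPath = HttpContext.Current.Server.MapPath(virtualPath);
        long version = File.GetLastWriteTimeUtc(physicalPath).Ticks;
        return VirtualPathUtility.ToAbsolute(virtualPath) + "?v=" + version;
    }
}

Then in the page, <link rel="stylesheet" href="<%= AssetHelper.Versioned("~/css/style.css") %>" /> produces a query string that changes automatically whenever the file does.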
Is it possible to compress JavaScript files, or anything related to a web page, before sending it to the client?
I am using Telerik controls, and found that their controls write a lot of extra JavaScript code that makes the page size huge (somewhere around 500 KB).
If you are using IIS7, it has support for compression built in. Highlight the web application folder (or even the website) in the tree view of IIS Manager; in the IIS panel in the next pane, select Compression, then in the right-hand pane select Open Feature. You then have two checkboxes to enable compression on static and dynamic content.
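If memory serves, the same two switches can also be set in web.config; a minimal sketch:

<system.webServer>
  <!-- Enable IIS7 compression for static and dynamic responses -->
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>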
Be aware, though, that this may not be a silver bullet: it will increase load on the server, and on the client as the browser unzips the content. 500 KB is a moderately sized page, but it isn't big. Compression like this is usually only beneficial when the network pipe is the problem, which it seldom is these days. Your issue may have more to do with lots of JavaScript running during the page's onload; if you see a reasonable difference in speed between IE7 and IE8, that may be an indication of this problem.
You can combine and minify your *.js and *.css files with http://github.com/jetheredge/SquishIt/
But I don't know if it can help you compress Telerik's scripts.
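For reference, SquishIt's API looks roughly like this (the file names are made up; the # in the output name is replaced with a content hash so the URL changes when the content does):

// Combine and minify local scripts into a single cached file
string scriptTag = SquishIt.Framework.Bundle.JavaScript()
    .Add("~/js/jquery.js")
    .Add("~/js/site.js")
    .Render("~/js/combined_#.js");
// scriptTag is the <script> element to write into the page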
GZIP, minification, and packing, provided you have access to the .js files. You can do this one-off or programmatically before sending it to the client.
Check this out.
http://www.julienlecomte.net/blog/2007/08/13/
I would like to open multiple download dialog boxes after the user clicks on a link.
Essentially what I am trying to do is allow the user to download multiple files. I don't want to zip up the files and deliver one zipped file, because that would require a lot of server resources given that some of the files are somewhat large.
My guess is that there may be some way with JavaScript to kick off multiple requests when the user clicks on a certain link. Or maybe there is a way on the server side to start off another request.
Unless the client is configured to download files automatically, you can't accomplish this without packaging the files in a single response (like the ZIP solution you mentioned). It would be a security issue if a web site were able to put an arbitrarily large number of files on your disk without telling you.
By the way, you might be overestimating the cost of packaging into a single file. Streaming files is usually an I/O-bound operation; there should be enough CPU cycles to spare for piping the data through a storage (tar) or compression (zip) step.
If you absolutely, positively cannot zip at the server level, this would probably be a good case for some sort of custom "download manager" client-side plugin that you have the user install; then you have complete control over how many files you download, where they go, and so on.
I suppose you could link to a frameset document or a document containing iframes, and set the src of each frame to one of the files you want to download.
That said, a zipped version would be better. If you are concerned about the load, then either:
zip the files with compression set to none (see the sketch after this list)
use caching on the server so you zip each group of files only once
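A minimal sketch of the no-compression option with System.IO.Compression (available in .NET 4.5+; the paths are made up):

using System.IO;
using System.IO.Compression;

// Package the files into one archive without compressing them,
// so the CPU cost is little more than copying the bytes
using (var archive = ZipFile.Open(@"C:\temp\bundle.zip", ZipArchiveMode.Create))
{
    foreach (var file in new[] { @"C:\data\report1.pdf", @"C:\data\report2.pdf" })
    {
        archive.CreateEntryFromFile(file, Path.GetFileName(file),
            CompressionLevel.NoCompression);
    }
}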
Present a page with a form of checkboxes listing the available files for download, so the user can select several at once.
The user selects multiple files and submits the form.
The server accepts the request and creates a page with serially triggered file-download JavaScript.
The page with the embedded JavaScript is presented to the user's browser, listing the files to be serially downloaded and asking for confirmation.
The user clicks the [yes - serially swamp my hard disk with these files] button.
For each file, a download-completed listener triggers the next download, until the end of the list.
I only know how to do this using Google GWT, where I had set up GWT RPC between browser and server. Took me two weeks to understand GWT RPC and perfect the download. Now it seems rather simple.
Basically (do you know "basically" is one of the most used non-technical words in the geek community?), you have to declare a server service class specifying the datatype/class to transfer, where the datatype must implement Serializable. On the browser side, the GWT client declares a corresponding receiver class specifying the same serializable datatype, and implements listeners for onSuccess and onFailure.
Hey, I even managed to augment GWT service base class so that I could use JSP rather than plain servlets to implement the service interface.
Actually, I was not downloading a series of files but streams that conditionally serially triggered the next stream, because my onSuccess routine would inspect the current stream to decide what content to request for on the next stream.
OK, two weeks was an exaggeration; it took me a week to do it. A genius would have taken only half a day.
I don't see what the big deal is with this. Why not something like this:
<a id="myLink" href="#">Click me</a>
<script type="text/javascript">
$('a#myLink').click(function() {
window.open('http://www.mysite.com/file1.pdf', 'file1');
window.open('http://www.mysite.com/file2.pdf', 'file2');
window.open('http://www.mysite.com/file3.pdf', 'file3');
});
</script>
One of my friends is working on a good solution for generating ASPX pages out of HTML pages produced by a legacy ASP application.
The idea is to run the legacy app, capture the HTML output, clean the HTML with some tool (say, HTML Tidy), and parse/transform it to ASPX (using XSLT or a custom tool) so that the existing HTML elements, divs, images, styles, etc. get converted neatly to an ASPX page (too much? ;)).
Any existing tools/scripts/utilities to do the same?
Here's what you do.
Define what the legacy app is supposed to do. Write down the scenarios of getting pages, posting forms, navigating, etc.
Write unit test-like scripts for the various scenarios.
Use the Python HTTP client library to exercise the legacy app in your various scripts.
If your scripts work, you (a) actually understand the legacy app, (b) can make it do the various things it's supposed to do, and (c) you can reliably capture the HTML response pages.
Update your scripts to capture the HTML responses.
You have the pages. Now you can think about what you need for your ASPX pages.
Edit the HTML by hand to make it into ASPX.
Write something that uses Beautiful Soup to massage the HTML into a form suitable for ASPX. This might be some replacement of text or tags with <asp:... tags.
Create some other, more useful data structure out of the HTML -- one that reflects the structure and meaning of the pages, not just the HTML tags. Generate the ASPX pages from that more useful structure.
Just found the HTML Agility Pack to be useful enough, as they understand C# better than Python.
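A minimal sketch of that kind of massaging with the HTML Agility Pack (the input file and the particular rewrite are made up):

using HtmlAgilityPack;

// Load a captured legacy page and rewrite pieces of it
var doc = new HtmlDocument();
doc.Load("legacy-page.html");

// SelectNodes returns null when nothing matches, so guard for that
var forms = doc.DocumentNode.SelectNodes("//form");
if (forms != null)
{
    foreach (var form in forms)
        form.SetAttributeValue("action", "Legacy.aspx");
}

doc.Save("legacy-page.aspx");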
I know this is an old question, but in a similar situation (50k+ legacy ASP pages that need to display in a .NET framework), I did the following.
Created a rewrite engine (an HttpModule) which catches all incoming requests and looks for anything that is from the old site (a minimal sketch follows these steps).
(In a separate class - keep things organized!) Use WebClient or HttpWebRequest, etc., to open a connection to the old server and download the rendered HTML.
Use the HTML Agility Pack (very slick) to extract the content I'm interested in - in our case, this is always inside of a div with the class "bdy".
Throw this into a cache - a SQL table in this example.
Each hit checks the cache and either (a) retrieves the page and builds the cache entry, or (b) just gets the page from the cache.
An ASPX page built specifically for displaying legacy content receives the rewritten request and displays the relevant content from the legacy page inside an ASP literal control.
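As a rough sketch of that fetch-extract-rewrite pipeline (the class name, the legacy host, and the display page are assumptions, and the caching layer is omitted):

using System;
using System.Net;
using System.Web;
using HtmlAgilityPack;

public class LegacyRewriteModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var ctx = ((HttpApplication)sender).Context;
            if (ctx.Request.Path.EndsWith(".asp", StringComparison.OrdinalIgnoreCase))
            {
                // Fetch the rendered page from the old server
                string html;
                using (var client = new WebClient())
                    html = client.DownloadString("http://legacy-server" + ctx.Request.RawUrl);

                // Extract the content div and hand it to the display page
                var doc = new HtmlDocument();
                doc.LoadHtml(html);
                var body = doc.DocumentNode.SelectSingleNode("//div[@class='bdy']");
                ctx.Items["LegacyContent"] = body?.InnerHtml ?? html;
                ctx.RewritePath("~/LegacyDisplay.aspx");
            }
        };
    }

    public void Dispose() { }
}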
The cache is there for performance: since the first request for a given page involves a minimum of two hits (one from the browser to the new server, one from the new server to the old server), I store cacheable data on the new server so that subsequent requests don't have to go back to the old server. We also cache images, CSS, scripts, etc.
It gets messy when you have to handle forms, cookies, etc., but these can all be stored in your cache and passed through to the old server with each request if necessary. I also store content expiration dates and other headers that I get back from the legacy server, and make sure to pass those back to the browser when rendering the cached page. Just remember to take as content-agnostic an approach as possible: you're effectively building an in-page web proxy that lets IIS render the old ASP the way it wants, and then manipulating the output.
Works very well - I have all of the old pages working seamlessly within our ASP.NET app. This saved us a solid year of development time that would have been required if we had to touch every legacy asp page.
Good luck!