Clearing caches across browsers not working when hosting in IIS [duplicate] - c#

I want my JavaScript that's referenced through <script> tags, my CSS, and various graphic files to be cached in the user's browser for a given release of my site. However, when I release an update to a file, I would like to ensure that the new content (JavaScript, CSS, graphics, etc.) will be updated on the user's machine.
In researching this problem, I've come across a number of possible solutions:
Adjust HTTP headers for things like Cache-Control and Expires
Append a unique querystring to each resource request
Include a version number in the path (or filename) of the resource being requested
My concern with option 1 (other than not knowing how to implement it for some content types) is that if there's an intermediate proxy between the browser and IIS, the proxy may not respect the HTTP headers, and the content would be cached between the browser and the proxy.
My concern with option 2 is twofold. For one, if I just use a random number or timestamp, then the browser will request a new resource every time, bypassing the local cache even when the content hasn't changed (i.e., between releases, when nothing has changed). A workaround to this problem is to use the timestamp of the resource file or a hash of the file; this would change only when a new release occurs. However, this leads to my second concern: I understand that some web proxies never cache anything with a query string by default (http://stackoverflow.com/questions/5541340/how-can-i-prevent-javascript-caching-querystring-approach-isnt-working). While this is likely customizable, I wouldn't want to rely on an administrator changing a setting from the default in order to get the performance benefit of caching.
Option 3 seems like the best choice, but I don't know how to implement it in a practical manner, for ASP .NET MVC (currently using ASP .NET MVC 3). By hand, I could go into every file that links to a graphic, css or external javascript file and change the path to include a different version number for each release, but clearly this is tedious and error prone.
My questions then are:
1) Are there any other strategies to avoid caching (for a given release) that I should consider and
2) assuming I use option 3 (including a version number in the path of the resource), how can I achieve this in an ASP .NET MVC application, in a way that is realistic to maintain?
Thanks,
Notre

The idea is that whenever the version of the JS or CSS files changes, you change the URL the server emits, so the browser requests the new version.
Please refer to the answer by Kip in this thread, which describes this clearly. Though the answer is in PHP, the same approach can be implemented in ASP.NET MVC.
If you are using the new bundling/minification feature of ASP.NET MVC 4, I think the versioning is taken care of automatically by it (not sure though).
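For ASP.NET MVC 3 you can get a similar effect with a small helper that stamps each resource URL with a hash of the file's contents. This is a minimal sketch, not production code; the helper name and hash length are my own, and if you want the version in the path (option 3) rather than the query string, you would pair this with a rewrite rule:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Security.Cryptography;
using System.Web;
using System.Web.Mvc;

public static class ResourceUrlExtensions
{
    // Hash each file only once per application lifetime.
    private static readonly ConcurrentDictionary<string, string> Hashes =
        new ConcurrentDictionary<string, string>();

    // Usage in a view: <link href="@Html.VersionedUrl("~/Content/site.css")" rel="stylesheet" />
    public static string VersionedUrl(this HtmlHelper html, string virtualPath)
    {
        string hash = Hashes.GetOrAdd(virtualPath, path =>
        {
            string physicalPath = HttpContext.Current.Server.MapPath(path);
            using (var md5 = MD5.Create())
            using (var stream = File.OpenRead(physicalPath))
            {
                // The first few hex characters are enough to bust the cache on change.
                return BitConverter.ToString(md5.ComputeHash(stream))
                                   .Replace("-", "").Substring(0, 8);
            }
        });
        return VirtualPathUtility.ToAbsolute(virtualPath) + "?v=" + hash;
    }
}

Because the hash only changes when the file's contents change, unchanged resources keep hitting the browser cache between releases.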

HTTP caching works in two ways: an expiration mechanism and a validation mechanism.
For the expiration mechanism you could use the Expires header (but this is best suited to static resources that you know will never change).
For the validation mechanism and conditional GETs, ETags are used, which seems to be a perfect fit for your problem.
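As a rough illustration of the validation side in an MVC controller (the action and tag value here are invented for the example):

using System.Net;
using System.Web;
using System.Web.Mvc;

public class ContentController : Controller
{
    public ActionResult Resource()
    {
        // In practice, derive the tag from the content (hash, version, timestamp).
        const string etag = "\"v42\"";

        // Conditional GET: if the client already holds this version, answer 304.
        if (Request.Headers["If-None-Match"] == etag)
            return new HttpStatusCodeResult((int)HttpStatusCode.NotModified);

        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetETag(etag);
        return Content("/* resource body */", "text/css");
    }
}

The browser re-sends the tag in If-None-Match on each revalidation, so unchanged content costs only a 304 round trip instead of a full download.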

Related

Log All Requests for User Per Webpage

I want to build an "audit trail" for all requests incoming to the server, however it needs to be specific per user, per web page.
For instance I imagine something like this:
On initial view render I would store (cookie/ page variable/ something else) a unique Id saying the user browsed to /myapp.com/dashboard/1234. - maybe in the layout.cshtml.
Then the app fires off X number of GET/ POST requests to the server each having that same unique Id initially tied to the view rendered.
This allows me then to tie back all requests for a page and add up the server execution time.
I tried using path-specific cookies, but I realized this won't work since a user can have many tabs open with the same URL. Also, the user works in many areas of the app at once. They can have anywhere from 1 to 10+ tabs open. Each of these should have its own unique Id and "audit trail" of all calls taking place on that page.
This is an existing app so modifying each of the GET/ POST to pass in the unique Id is out of scope. Just hoping I am missing something that might take care of this.
Thank you!
If I'm understanding you correctly, you have a single page load, and then additional requests made either for images and other resources or AJAX requests that you want tied to and tracked along with that initial page load.
The chief problem you're going to have here is that, based on the way HTTP works, each request is handled as its own thing and not considered as part of a greater whole. The web browser makes it all look seamless, but all the web server is doing is just responding to a bunch of (as far as it knows) unrelated requests for various different things. To track them all as one unit, you would either need to attach some unique id to the request itself (for a GET, that would be either part of the URI path or the query string) or lean on Session to introduce state between the requests. However, session state really only works in this scenario when all requests can be tied to a single initial request. Once the user starts working with multiple different pages at once, there's no reasonable way to discern which request belongs to what, and you're back in the same boat.
In other words, your only real option is to send something along with the request, which would mean doing something like:
<link rel="stylesheet" type="text/css" href="/path/to/file.css?origin=@Url.Encode(Request.RawUrl)" />
Then, you could have an action filter that looks for origin in the query string of any request, and ties it to the logging for that particular page.
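A minimal sketch of such a filter (the logging call is a placeholder; register the attribute in GlobalFilters.Filters to apply it everywhere):

using System.Web.Mvc;

public class OriginTrackingAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Pull the originating page out of the query string, if present.
        string origin = filterContext.HttpContext.Request.QueryString["origin"];
        if (!string.IsNullOrEmpty(origin))
        {
            // Placeholder: tie this request to the page that spawned it,
            // e.g. auditLog.Record(origin, filterContext.HttpContext.Request.RawUrl);
        }
        base.OnActionExecuting(filterContext);
    }
}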
For what it's worth, it should be noted that by default, IIS will handle all requests for static resources directly, without involving ASP.NET. If you do want to track requests for static resources, you would have to pass them all through ASP.NET, which will be kind of a pain. If you only want to track AJAX requests, that's much simpler and shouldn't require anything special for the most part.
All that said, if the only purpose of this is to track page load time, there's far better and easier ways to do that. You can install Glimpse. You can use your browser's developer console. You can use something like Google Analytics. All of these are far preferable to the path you're going down here, for page load statistics.
Write an ActionFilter to do this. There are many examples of this:
http://rion.io/2013/04/15/creating-advanced-audit-trails-using-actionfilters-in-asp-net-mvc/
http://blog.ploeh.dk/2014/06/13/passive-attributes/
I personally like Mark Seemann's example more since it clearly defines a nice separation of concerns for the attribute and the filter.

MVC Get Vs Post

While going through MVC concepts, I have read that it is not a good practice to have code inside a 'GET' action that changes the state of server objects (DB updates, etc.).
'Caching of return data' has been given as a reason for this.
Could someone please explain this?
Thanks in advance!
This is per the HTTP standard. The GET verb is one that should be idempotent and safe.
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in
their interactions over the Internet, and should be careful to allow
the user to be aware of any actions they might take which may have an
unexpected significance to themselves or others.
In particular, the convention has been established that the GET and
HEAD methods SHOULD NOT have the significance of taking an action
other than retrieval. These methods ought to be considered "safe".
This allows user agents to represent other methods, such as POST, PUT
and DELETE, in a special way, so that the user is made aware of the
fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not
generate side-effects as a result of performing a GET request; in
fact, some dynamic resources consider that a feature. The important
distinction here is that the user did not request the side-effects, so
therefore cannot be held accountable for them.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
Browsers can cache GET requests, generally on static data, like images or scripts. But you can also allow browsers to cache GET requests to controller actions as well, using [OutputCache] or other similar ways, so if caching is turned on for a GET controller action, it's possible that clicking on a link leading to /Home/Index doesn't actually run the Index method on the server, but rather allows the browser to serve up the page from its own cache.
With this line of thinking, you can safely turn on caching on GET actions in which the data you're serving up doesn't change (or doesn't change often), with the knowledge that your server action won't fire every time.
POSTs won't be cached by the browser, so any POST is guaranteed to make it to the server.
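For instance, a minimal cached GET action (the duration here is arbitrary):

using System.Web.Mvc;

public class HomeController : Controller
{
    // Repeated GETs within 60 seconds may be served from cache
    // without this method ever running on the server.
    [OutputCache(Duration = 60, VaryByParam = "none")]
    public ActionResult Index()
    {
        return View();
    }
}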
Ignore caching for a moment. Another way of thinking about this is that search engines will store HTTP GET links during their indexing/crawling process, therefore they will show up in search results.
Suppose your /Home/Index is implemented as a GET but, let's say, deletes a row in your database. Every time this link shows up on a search engine and somebody clicks it, you will have a deleted row, and soon you'll have a lot of deleted rows.
The HTTP spec states that GET and HEAD are expected to be safe, i.e. they should not change server state.
One practical aspect of this, is that search robots will issue GET against any link to your site they know of. If such a GET changes user data it was not meant to change, you are in trouble.
Being safe has the added benefit that clients may cache the result of a GET (use HTTP headers to control this).

How does WebResources.axd or ScriptResources.axd actually work?

Where can I learn how WebResources.axd or ScriptResources.axd actually works?
What is the string that is appended to the .axd? Does this string change, or is it constant? Is it page, session specific? Can these files be cached on a proxy?
How does it work internally? This is especially important after the ASP.NET vulnerability was discovered... as other people will want to avoid implementing similar coding errors.
My understanding is that an encrypted key is used to direct how they operate.. (machine key) but I don't know much more.
You might want to check out the answers to this other question on StackOverflow: ScriptResource.axd d query string parameter.
It seems like these are just static javascript resources, where the query string is a hash identifying the DLL version they're housed in.
To see if the content changes at all for different pages and requests to the same IIS application, you could use any number of tools -- e.g. Firebug's Net panel in Firefox -- to view the HTTP request and response bodies, then diff them with e.g. WinMerge to see if the content is changing.
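For background, resources of this kind are embedded in an assembly and exposed through the WebResource attribute; a minimal sketch (the namespace and file names are illustrative):

using System.Web.UI;

// The .js file must be compiled into the assembly as an embedded resource.
[assembly: WebResource("MyLibrary.Scripts.widget.js", "application/x-javascript")]

namespace MyLibrary
{
    public class WidgetControl : Control
    {
        protected override void OnPreRender(System.EventArgs e)
        {
            base.OnPreRender(e);
            // Emits a WebResource.axd URL whose query string encodes the
            // assembly and resource name, protected with the machine key.
            string url = Page.ClientScript.GetWebResourceUrl(
                typeof(WidgetControl), "MyLibrary.Scripts.widget.js");
            Page.ClientScript.RegisterClientScriptInclude("widget", url);
        }
    }
}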

ASP.NET URL remapping &redirection - Best Practice needed

This is the scenario: I have a list of about 5000 URLs which have already been published to various customers. Now, the location of all of these URLs has changed on the server side. The server is still the same, though. This is an ASP.NET website with .NET 3.5/C#.
My requirement is: though customers use the old source URLs, they should be redirected to the new URLs without any perceived change or intermediate redirection message, etc.
I am trying to make sense of the whole scenario:
Where would I put the actual mapping of Old URL to New URL -- in a database or some config. file or is there a better option?
How would I actually implement the redirect?
Should I write a method with Server.Transfer or Response.Redirect?
And is there a best practice, like placing the actual re-routing in an HttpModule, or in Application_BeginRequest?
I am looking to achieve with a best-practice compliant methodology and very low performance degradation, if any.
If your application already uses a database then I'd use that. Make the old URL the primary key and lookups should be very fast. I'd personally wrap the whole thing in .NET classes that abstract it and allow you to create a Dictionary<string,string> of all the URLs, which can be loaded into memory from the DB and cached. This will be even faster.
Definitely DON'T use Server.Transfer. Instead you should do a 301 Moved Permanently redirect. This will let search engines know to use the new URL. If you were using .NET 4.0 you could use the HttpResponse.RedirectPermanent method. However, in earlier versions you have to set the headers yourself - but this is trivial.
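Putting those pieces together, a rough sketch of the cached lookup plus a hand-rolled 301 for .NET 3.5 (the map is assumed to be loaded from your database at startup):

using System;
using System.Collections.Generic;
using System.Web;

public class RedirectModule : IHttpModule
{
    // Assumed: populated once from the database when the application starts.
    private static readonly Dictionary<string, string> UrlMap =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

    public void Init(HttpApplication context)
    {
        context.BeginRequest += (sender, e) =>
        {
            var app = (HttpApplication)sender;
            string newUrl;
            if (UrlMap.TryGetValue(app.Request.RawUrl, out newUrl))
            {
                // 301 by hand, since HttpResponse.RedirectPermanent is 4.0-only.
                app.Response.StatusCode = 301;
                app.Response.StatusDescription = "Moved Permanently";
                app.Response.RedirectLocation = newUrl;
                app.Response.End();
            }
        };
    }

    public void Dispose() { }
}

Register the module under <httpModules> (classic pipeline) or <system.webServer>/<modules> (integrated pipeline) in web.config.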
Keep the data in a database, but load into ASP.NET cache to reduce access time.
You definitely want to use HttpModules. It's the accepted practice, and having recently tried to do it inside Global.asax, I can tell you that unless you want to do only the simplest kind of stuff (i.e. "~/mypage.aspx/3" <-> "~/mypage.aspx?param1=3"), it's much more complicated and buggy than it seems.
In fact, I regret even trying to roll my own URL rewriting solution. It's just not worth it if you want something you can depend on. Scott Guthrie has a very good blog post on the subject, and he recommends UrlRewriter.net or UrlRewriting.net as a couple of free, open-source URL rewriting solutions.
Good luck.

Hold global data for an ASP.net webpage

I am currently working on a large-scale website, that is very dynamic, and so needs to store a large volume of information in memory on a near-permanent basis (things like configuration settings for the checkout, or the tree used to implement the menu structure).
This information is not session-specific, it is consistent for every thread using the website.
What is the best way to hold this data globally within ASP.NET, so it can be accessed when needed, instead of being re-loaded on each use?
Any AppSettings in web.config are automatically cached (i.e., they aren't read from the XML every time you need to use them).
You could also manually manipulate the cache yourself.
Edit: Better links...
Add items to the cache
Retrieve items from the cache
Caching Application Data
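A minimal example of the manual route (the cache key and loader are illustrative):

using System;
using System.Web;
using System.Web.Caching;

public static class MenuCache
{
    public static MenuTree GetMenu()
    {
        var menu = (MenuTree)HttpRuntime.Cache["menu-tree"];
        if (menu == null)
        {
            menu = LoadMenuFromDatabase(); // assumed expensive call
            HttpRuntime.Cache.Insert(
                "menu-tree", menu,
                null,                         // no cache dependency
                DateTime.UtcNow.AddHours(1),  // absolute expiration
                Cache.NoSlidingExpiration);
        }
        return menu;
    }

    private static MenuTree LoadMenuFromDatabase() { return new MenuTree(); }
}

public class MenuTree { /* menu structure omitted */ }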
It's not precisely clear whether your information is session specific or not...if it is, then use the ASP Session object. Given your description of the scale, you probably want to look at storing the state in Sql Server:
http://support.microsoft.com/kb/317604
That's the 101 approach. If you're looking for something a little beefier, then check out memcached (that's pronounced Mem-Cache-Dee):
http://www.danga.com/memcached/
That's the system that apps like Facebook and Twitter use.
Good luck!
Using ASP.NET caching feature is a good option I think. In addition to John's answer, you can use Microsoft's Patterns & Practices team's Caching Application Block.
This is a good video exploring the different ways to can retain application state.
http://www.asp.net/learn/3.5-videos/video-11.aspx
It touches on the Application object, which is global for the whole application, for all users, and shows you how to create a hit counter (obviously, instead of storing an integer you could store objects). If you need to make changes, you do need to use a lock for concurrency, and I'm not sure how it handles LARGE amounts of data because I've never had to keep that much there.
I usually keep things like that in the Application object.
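For example, a hit counter with the locking mentioned above (the key name is arbitrary):

using System.Web;

public static class AppState
{
    public static void IncrementHitCounter(HttpApplicationState application)
    {
        // Application state is shared across all users, so serialize writes.
        application.Lock();
        try
        {
            int hits = (application["HitCounter"] as int?) ?? 0;
            application["HitCounter"] = hits + 1;
        }
        finally
        {
            application.UnLock();
        }
    }
}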
If the pages are dependent upon one another and they post to one another, you could use the page's request object. Probably not the answer you're looking for, but definitely one of the smallest in memory to use.
I have run into the same situation in the past and found an interface to be the most scalable solution. The Application cache may be the answer today, but will it scale to meet your needs?
If you need to scale up, you may find cookies, or some type of temp database storage, to do the trick. Simply add a new implementation of your interface, and choose the "mode" from web.config.
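A sketch of that kind of abstraction (the interface, class name, and appSettings key are all illustrative):

using System;
using System.Configuration;

public interface IStateStore
{
    object Get(string key);
    void Set(string key, object value);
}

public static class StateStoreFactory
{
    // web.config: <add key="stateStoreType"
    //                  value="MyApp.CacheStateStore, MyApp" />
    public static IStateStore Create()
    {
        string typeName = ConfigurationManager.AppSettings["stateStoreType"];
        return (IStateStore)Activator.CreateInstance(Type.GetType(typeName, true));
    }
}

// One possible mode; cookie- or database-backed stores would implement
// the same interface and be swapped in via config.
public class CacheStateStore : IStateStore
{
    public object Get(string key) { return System.Web.HttpRuntime.Cache[key]; }
    public void Set(string key, object value) { System.Web.HttpRuntime.Cache[key] = value; }
}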
