How does WebResources.axd or ScriptResources.axd actually work? - c#

Where can I learn how WebResources.axd or ScriptResources.axd actually works?
What is the string that is appended to the .axd? Does this string change, or is it constant? Is it page, session specific? Can these files be cached on a proxy?
How does it work internally? This is especially important after the ASP.NET vulnerability was discovered, since other people will want to avoid implementing similar coding errors.
My understanding is that an encrypted key (the machine key) is used to direct how they operate, but I don't know much more.

You might want to check out the answers to this other question on StackOverflow: ScriptResource.axd d query string parameter.
It seems like these are just static javascript resources, where the query string is a hash identifying the DLL version they're housed in.
To see whether the content changes at all for different pages and requests to the same IIS application, you could use any number of tools -- e.g. Firebug's Net panel in Firefox -- to view the HTTP request and response bodies, then diff them with e.g. WinMerge to see if the content is changing.
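For background, an embedded resource is typically wired up something like this (assembly, type and resource names here are made up). The URL returned is the WebResource.axd URL, whose "d" value appears to be an encrypted token (protected with the machine key) identifying the assembly and resource, and whose "t" value is a timestamp tied to the assembly build, so the URL is stable for a given build:

    // In AssemblyInfo.cs of the class library: register the embedded file as a web resource.
    [assembly: System.Web.UI.WebResource("MyLibrary.Scripts.widget.js", "text/javascript")]

    // In a control or page: ask ASP.NET for the WebResource.axd URL for that resource
    // and emit a <script> include pointing at it.
    string url = Page.ClientScript.GetWebResourceUrl(typeof(MyWidget), "MyLibrary.Scripts.widget.js");
    Page.ClientScript.RegisterClientScriptInclude("widget", url);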

Related

Clearing caches across browsers not working when hosting in iis [duplicate]

I want my JavaScript that's referenced through <script> tags, my CSS, and various graphic files to be cached in the user's browser for a given release of my site. However, when I release an update to a file, I would like to ensure that the new content (JavaScript, CSS, graphics, etc.) will be picked up on the user's machine.
In researching this problem, I've come across a number of possible solutions:
Adjust HTTP headers for things like Cache-Control and Expires
Append a unique querystring to each resource request
Include a version number in the path (or filename) of the resource being requested
My concern with option 1 (other than not knowing how to implement it for some content types) is that if there's an intermediate proxy between the browser and IIS, the proxy may not respect the HTTP headers, and stale content could be served from it.
My concern with option 2 is twofold. For one, if I just use a random number or timestamp, then the browser will request a new resource every time, bypassing the local cache even when the content hasn't changed (when not between releases). A workaround to this problem is to use the timestamp of the resource file or a hash of the file; this would change only when a new release occurs. However, this leads to my second concern: I understand that some web proxies never cache anything with a query string by default (http://stackoverflow.com/questions/5541340/how-can-i-prevent-javascript-caching-querystring-approach-isnt-working). While this is likely customizable, I wouldn't want to rely on the administrator changing a setting from the default in order to get the performance benefit of caching.
Option 3 seems like the best choice, but I don't know how to implement it in a practical manner for ASP.NET MVC (currently using ASP.NET MVC 3). By hand, I could go into every file that links to a graphic, CSS or external JavaScript file and change the path to include a different version number for each release, but clearly this is tedious and error prone.
My questions then are:
1) Are there any other strategies I should consider for making sure updated content is picked up on a new release (while still caching within a release), and
2) assuming I use option 3 (including a version number in the path of the resource), how can I achieve this in an ASP.NET MVC application, in a way that is realistic to maintain?
Thanks,
Notre
The idea is that whenever the version of the JS or CSS files changes, you have to change the URL served, so the browser requests the new version.
Please refer to the answer by @Kip in this thread, which clearly describes this. Though the answer is in PHP, it can be implemented in ASP.NET MVC just as well.
If you are using the new bundling/minification feature of ASP.NET MVC 4, I think the versioning is taken care of automatically by it (not sure though).
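For reference, the MVC 4 bundling mentioned above is configured roughly like this (file names are illustrative); the framework appends a ?v=<content hash> token to the bundle URL, and that token changes whenever any bundled file changes:

    using System.Web.Optimization;

    public class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {
            // Rendered in views with Scripts.Render("~/bundles/site") / Styles.Render("~/Content/css"),
            // which emit URLs like /bundles/site?v=<content hash>.
            bundles.Add(new ScriptBundle("~/bundles/site").Include(
                "~/Scripts/site.js",
                "~/Scripts/plugins.js"));
            bundles.Add(new StyleBundle("~/Content/css").Include("~/Content/site.css"));
        }
    }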
HTTP caching is done in two ways: by a validation mechanism and by an expiration mechanism.
For the expiration mechanism you could use the Expires header (but this is good for static resources which you know will never change).
For the validation mechanism and conditional GETs, ETags are used, which seems to be a perfect fit for your problem.
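And for option 3 from the original question (a version token baked into the resource URL), here is a minimal sketch of a helper for an MVC 3 Razor view; the helper name and paths are made up, and you could switch the query string for a path segment plus a rewrite rule if proxies ignoring query strings is a concern:

    using System.IO;
    using System.Web.Hosting;
    using System.Web.Mvc;

    public static class UrlVersionExtensions
    {
        // Appends the file's last-write time, so the URL changes only when the file does.
        public static string VersionedContent(this UrlHelper url, string contentPath)
        {
            string physicalPath = HostingEnvironment.MapPath(contentPath);
            string version = File.GetLastWriteTimeUtc(physicalPath).Ticks.ToString();
            return url.Content(contentPath) + "?v=" + version;
        }
    }

    // In a Razor view: <link rel="stylesheet" href="@Url.VersionedContent("~/Content/site.css")" />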

Should I use Request.UrlReferrer when determining referrals

I came upon an interesting discussion with my team around the use of HttpRequest.UrlReferrer and wanted to solicit feedback from the community. According to the W3C spec:
The Referer[sic] request-header field allows the client to specify, for the server's benefit, the address (URI) of the resource from which the Request-URI was obtained (the "referrer", although the header field is misspelled.) The Referer request-header allows a server to generate lists of back-links to resources for interest, logging, optimized caching, etc. It also allows obsolete or mistyped links to be traced for maintenance. The Referer field MUST NOT be sent if the Request-URI was obtained from a source that does not have its own URI, such as input from the user keyboard.
The Request.UrlReferrer object does the work of converting referral strings that contain well-formed URIs into an object with properties, on every request.
According to our logs there are requests that come in that contain invalid data in the referral such as:
localhost
app:/BeamBackTest.swf
app:/multtiple.swf
app:/AFriendFeed.swf
ALToolBar
app:/index.html
mhtml:file://C:\Documents+and+Settings\User\Desktop\oracle\What+is+a+View+in+Oracle+-+Stack+Overflow.mht
Using Request.UrlReferrer would mean the above cases would be NULL. Is it better to discard the invalid data based on the W3C spec by using Request.UrlReferrer, or to preserve it by using Request.ServerVariables["HTTP_REFERER"], even though the data may be interesting but potentially useless?
Well, it depends on what you're doing with the data. If you're doing something that requires a valid URI, then you might as well use Request.UrlReferrer since you have to strip it down anyway. But if all you're doing is logging it, and your logging tool can handle weird things, I'd recommend the second method, especially for the app:/ data, which is likely to yield useful patterns.
It's subjective, based on your needs. We store everything, even if it's junk. One day we might know how to process it better and it won't be junk anymore. If we didn't store it because we didn't understand it at the time, we'd never be able to get it back.
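A middle ground, as a rough sketch (the Log calls are placeholders for whatever logger you use): keep the raw header verbatim for storage, and parse it yourself when you also want a real Uri.

    // Raw header: preserves junk values like "ALToolBar" for later analysis.
    string rawReferrer = Request.ServerVariables["HTTP_REFERER"];

    // Parsed form: roughly what Request.UrlReferrer gives you when the value is a valid URI.
    Uri parsedReferrer;
    bool isValid = Uri.TryCreate(rawReferrer, UriKind.Absolute, out parsedReferrer);

    Log(rawReferrer ?? "(none)");                 // placeholder logging call
    if (isValid)
    {
        Log("referrer host: " + parsedReferrer.Host);
    }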

Are Querystrings in .NET Good Practice?

I'm developing a web app that has a database backend. In the past I've done stuff like:
http://page.com/view.aspx?userid=123 to view user 123's profile; using a querystring.
Is it considered good practice to use a querystring? Is there something else I should be doing?
I'm using C# 4.0 and ASP.net.
Your question isn't really a .NET question... it is a concern that every web framework and web developer deals with in some way.
Most agree that for the main user-facing portion of your website you should avoid long query strings in favor of a URL structure that makes "sense" to the website visitor. Try to use a logical hierarchy such that when the visitor reads it there is a good chance they can deduce where they are on the site. Click around StackOverflow in a few areas and see what they have done with the URLs. You usually have a pretty good idea what you're looking at and where you are.
A couple of other heads-up... Although a lot of database lookups are done with the primary key, it's also a good idea to provide a user-friendly name of the resource in your URL instead of just the primary key. You see StackOverflow doing that in the current address, where they're doing the lookup with the primary key "3544483" but also including an SEO/user-friendly URL parameter, "are-querystrings-in-net-good-practice." If someone emailed you that link you'd have a pretty good idea of what you're about to open up.
I'm not really sure how WebForms handles URL routing, but if you're struggling to grasp the concepts, go through the MVC NerdDinner tutorial. They cover some basic URL routing in there that could help.
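If you do go the routing route, a registration along these lines (controller, action and parameter names are illustrative) in an MVC 3 Global.asax gives you /user/view/123 instead of /view.aspx?userid=123:

    using System.Web.Mvc;
    using System.Web.Routing;

    public static void RegisterRoutes(RouteCollection routes)
    {
        // Maps /user/view/123 (optionally /user/view/123/some-friendly-slug)
        // to UserController.View(int id, string slug).
        routes.MapRoute(
            "UserProfile",
            "user/view/{id}/{slug}",
            new { controller = "User", action = "View", slug = UrlParameter.Optional });
    }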
Query strings are perfectly fine if you're sure to lock down what people are meant to view. You should be checking for a valid value (number, not null, etc.) and, if your application has security, whether a visitor has permission to view user 1245's profile (a sketch of both checks follows below).
You could look into Session & ViewState, but QueryString seems to be what you're after.
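A rough sketch of those two checks in a WebForms code-behind (CurrentVisitorCanView and LoadProfile are placeholders for your own logic):

    int userId;
    if (!int.TryParse(Request.QueryString["userid"], out userId))
    {
        // Not a number: reject rather than let it blow up further down.
        Response.StatusCode = 400;
        Response.End();
        return;
    }
    if (!CurrentVisitorCanView(userId))   // placeholder: your own permission check
    {
        Response.StatusCode = 403;
        Response.End();
        return;
    }
    LoadProfile(userId);                  // placeholder: load and bind the profile data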
If possible, I think this practice should be avoided, especially if you're passing auto-incrementing ids in plain text. In my opinion, you're almost teasing the user to manipulate the querystring value and see if they can get access to someone else's profile. Even with appropriate security measures in place (validating the request on the server side before rendering the page), I would still recommend encrypting the querystring param in this particular case.
I think using query strings is perfectly fine, but there's a case to be made for hackable URLs, in that they are more understandable to advanced users and are SEO-friendly. For example, I happen to think http://www.example.com/user/view/1234 looks more intuitive than http://www.example.com/view.aspx?user=1234.
And you don't have to alter your application to use pretty URLs if you're using IIS 7.0. The URL Rewrite Module and a few rewriting rules should be enough.
To answer your question clearly: yes, it's a good practice. In fact it's expected behavior for a web site.
I totally agree with ShaderOp, and you should use a URL rewriter to get a nice-looking URL. In fact I'm assuming that you will put in a bit of validation to avoid someone manipulating the URL and accessing data they shouldn't.
Query strings are OK, but don't compromise security with them.
If the profile you are accessing belongs to the currently logged-in user, there's no need to send the uid at all. Just go to /profile and load the logged-in user's information.
If you are looking at another member's profile, I recommend just going with their username, an encrypted id, or a GUID.
Exposing user ids to clients is generally not a good idea.

ASP.NET URL remapping & redirection - Best Practice needed

This is the scenario: I have a list of about 5000 URLs which have already been published to various customers. Now, all of these URLs' locations have changed on my server side. The server is still the same though. This is an ASP.NET website with .NET 3.5/C#.
My requirement is : Though the customers use the older source URL they should be redirected to the new URL without any perceived change or intermediate redirection message etc.
I am trying to make sense of the whole scenario:
Where would I put the actual mapping of Old URL to New URL -- in a database or some config. file or is there a better option?
How would I actually implement the redirect:
Should I write a method with Server.Transfer or Response.Redirect?
And is there a best practice for it, like placing the actual re-routing in an HttpModule, or in Application_BeginRequest?
I am looking to achieve this with a best-practice-compliant methodology and very low performance degradation, if any.
If your application already uses a database then I'd use that. Make the old URL the primary key and lookups should be very fast. I'd personally wrap the whole thing in .NET classes that abstract it and allow you to create a Dictionary<string,string> of all the URLs, which can be loaded into memory from the DB and cached. This will be even faster.
Definitely DON'T use Server.Transfer. Instead you should do a 301 Moved Permanently redirect. This will let search engines know to use the new URL. If you were using .NET 4.0 you could use the HttpResponse.RedirectPermanent method. However, in earlier versions you have to set the headers yourself - but this is trivial.
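A minimal sketch of the manual 301 for pre-4.0, assuming you have the HttpContext in hand as context and newUrl came out of the old-to-new lookup described above:

    // HttpResponse.RedirectPermanent does this for you in .NET 4.0.
    context.Response.Clear();
    context.Response.StatusCode = 301;
    context.Response.Status = "301 Moved Permanently";
    context.Response.AddHeader("Location", newUrl);
    context.Response.End();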
Keep the data in a database, but load into ASP.NET cache to reduce access time.
You definitely want to use HttpModules. It's the accepted practice, and having recently tried to do it inside Global.asax, I can tell you that unless you want to do only the simplest kind of stuff (i.e. "~/mypage.aspx/3" <-> "~/mypage.aspx?param1=3") it's much more complicated and buggy than it seems.
In fact, I regret even trying to roll my own URL rewriting solution. It's just not worth it if you want something you can depend on. Scott Guthrie has a very good blog post on the subject, and he recommends UrlRewriter.net or UrlRewriting.net as a couple of free, open-source URL rewriting solutions.
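That said, for a plain old-URL-to-new-URL redirect table (not full rewriting), a bare-bones module stays small. A sketch, with UrlMapRepository as a placeholder for however you load the mapping; the module still has to be registered under <modules> in web.config:

    using System.Collections.Generic;
    using System.Web;

    public class LegacyUrlRedirectModule : IHttpModule
    {
        // Loaded once from the database and cached for the life of the app domain.
        private static readonly Dictionary<string, string> Map = UrlMapRepository.LoadAll(); // placeholder

        public void Init(HttpApplication app)
        {
            app.BeginRequest += (sender, e) =>
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                string newUrl;
                if (Map.TryGetValue(ctx.Request.RawUrl, out newUrl))
                {
                    ctx.Response.StatusCode = 301;
                    ctx.Response.AddHeader("Location", newUrl);
                    ctx.Response.End();
                }
            };
        }

        public void Dispose() { }
    }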
Good luck.

Compressing parameters in the URL

The URLs on my site can become very long, and it's my understanding that URLs are transmitted with the HTTP requests. So the idea came up to compress the string in the URL.
From my searching on the internet, I found suggestions about using short URLs and then linking them to the long URL. I'd prefer not to use this solution because I would have to do an extra database check to convert between long and short URLs.
That leaves, in my head, 3 options:
Hashing. I don't think this is an option: if you want a safe hashing algorithm, it's going to be long.
Compressing the URL string, basically having the server decompress the string when it gets the URL parameters.
Changing the URL so it's not descriptive; this is bad because it would make development harder for me (this is a 1-man project).
Considering the vast number of OS/browser combinations out there, I figured I'd ask if anyone else has tried this or has some clever suggestions.
If it matters, the URL parameters can reach 100+ chars.
Example:
mysite.com/Reports/Ability.aspx?PlayerID=7737&GuildID=132&AbilityID=1140&EventID=1609&EncounterID=-1&ServerID=17&IsPlayer=True
EDIT:
Let me clarify: atm this is NOT breaking the site. It's more about me learning to find a good solution (I'm well aware this is micro-optimization; my site is very fast atm) and making my site even faster (to challenge myself, and become a better coder).
There is also a cosmetic issue; I personally think that a URL longer than the address bar looks bad.
You have some conflicting requirements as you want to shorten/compress the url without making it less descriptive. By the very nature of shortening the URL, you will, to a certain extent, make it less descriptive.
As I understand it, your goal is to optimise by sending less over the request. You mention 100+ characters, rather than 1000+, which I assume means they don't get that big? In which case, I'd see this as an unnecessary micro-optimisation.
To add to previous suggestions of using POST, a simple thing would be to just shorten the keys instead of using full names, if you don't want to do full URL shortening, e.g.:
mysite.com/Reports/Ability.aspx?pid=7737&GID=132&AID=1140&EID=1609&EnID=-1&SID=17&IsP=True
These are obviously less descriptive.
But like I said, are you having a real problem with having long URLs?
I'm not sure I understand what your problem with long URLs is. Generally I'd try to avoid them, but if they're necessary then you won't depend on the user remembering them anyway, so why go through all the compressing trouble? Even with a URL of 1000 chars (~2KB) the page request won't be slow.
I would, however, consider using POST instead of GET if possible, to prettify the URL, but of course that depends on your implementation / environment.
It is recommended a few times here to use POST instead of GET. I would strongly recommend AGAINST picking your HTTP action by what the URL looks like. There is more to this choice than how it is displayed in the browser.
A quick overview:
http://www.w3.org/2001/tag/doc/whenToUseGet.html#checklist
A few options to add to the other answers:
Using a subclassed LinkButton for your navigation. This holds the extra data (PlayerId for example) inside its viewstate as properties. This won't be much help though if you're giving URLs to people via emails.
Use the MVC routing engine to produce slightly improved URLs - no keys for the querystring. e.g. mysite.com/Reports/Ability/7737/132/1140/1609/-1/17/True
Create your own URL shortener like tinyurl.com. Store the url in the database along with each of the querystring values to lookup.
Simply set up some friendly URLs for the most popular reports, for example mysite.com/Reports/JanuaryReport. You can also do this using the MVC routing engine.
The MVC routing engine is stand alone and can work without your site being an MVC site.
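As a rough sketch of that last point (assuming .NET 4, and reusing the parameter names from the example URL), System.Web.Routing can map a friendly URL onto the existing .aspx page without MVC:

    // Global.asax
    void Application_Start(object sender, EventArgs e)
    {
        System.Web.Routing.RouteTable.Routes.MapPageRoute(
            "AbilityReport",
            "Reports/Ability/{PlayerID}/{GuildID}/{AbilityID}",
            "~/Reports/Ability.aspx");
    }

    // In Ability.aspx.cs, read Page.RouteData.Values["PlayerID"] etc. instead of Request.QueryString.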
With my scheme, one could encode the params section of a URL as a base64 string which is ~50% shorter than a direct base64 representation. So for your case you get:
~50% shorter params section
a base64 string which hides a lot of the detail
see http://blog.alivate.com.au/packed-url/
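Not that exact scheme, but to illustrate the general "compress then URL-safe-encode" idea (option 2 from the question), here is a rough sketch. Note that for query strings of only 100-odd characters the gzip header overhead can easily make the packed form longer than the original, so measure before adopting it:

    using System.IO;
    using System.IO.Compression;
    using System.Text;
    using System.Web;

    static class UrlPacker
    {
        // "PlayerID=7737&GuildID=132&..." -> gzipped, URL-safe base64 token.
        public static string Pack(string query)
        {
            byte[] raw = Encoding.UTF8.GetBytes(query);
            using (var buffer = new MemoryStream())
            {
                using (var gzip = new GZipStream(buffer, CompressionMode.Compress))
                    gzip.Write(raw, 0, raw.Length);
                return HttpServerUtility.UrlTokenEncode(buffer.ToArray());
            }
        }

        // Token from the URL -> original query string, ready to parse server-side.
        public static string Unpack(string token)
        {
            byte[] packed = HttpServerUtility.UrlTokenDecode(token);
            using (var gzip = new GZipStream(new MemoryStream(packed), CompressionMode.Decompress))
            using (var reader = new StreamReader(gzip, Encoding.UTF8))
                return reader.ReadToEnd();
        }
    }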
Most browsers can handle up to 2048 characters in a URL; if you don't feel like using a long parameter list, you can always pass parameters through POST requests.
There are theoretical problems with extended URLs. The exact limit varies across browsers (roughly 2k in some versions of IE) and servers (4-8k in Apache, varying with version and configuration), and isn't officially specified in any RFC that I am aware of.
I would agree with synhershko, and replace the URL with form POST parameters instead if you are concerned that your URLs are growing too long.
I've encountered similar situations in the past, although my reasons for optimisation were for SEO. To me it depends on what you're doing with the page URL variables, are they being appended on all/most pages? If they are then to me there is almost always a much better way, although if you're far down the development path it's probably too late now.
I like being able to 'read' a URL; especially when I drop into an unknown site 2 or more layers deep in the navigation and the site is designed poorly, it's often the easiest and fastest way for an advanced user to find where they are on the site.
If you're interested in it from an SEO point of view, it's normally best to have a hierarchy which only contains: / - _
Search engines will try to read URLs; see this video by Matt Cutts (can't remember how far into the video he mentions it, but it's a good watch anyway...)
Any form of compression of the URL (hashing, compressing, non-descriptive) is going to:
make the URLs harder to read, remember and type in correctly
have a performance impact, as you will have to decrypt/decompress/convert the URL before you can work with it.
Also, hashing is usually considered to be non-reversible - given a hashed value you shouldn't be able to work out what generated it, but you could use it to look up a value in a database, which gets you back to your first issue of short-long lookups.
You could easily just remove the redundant "ID" at the end of each parameter, and possibly strip out vowels or similar to "shorten" the url without losing too much from the semantics of the request.
But to be honest, the length of your URL is one of the least things to worry about in terms of performance - look at the size of any cookies you're sending back and forth between the browser and the server, and the page size you're sending back.
