I have an ASP.NET C# application and I keep getting various errors, like the one you can see below. Does anyone know how to fix this? Thank you.
Page: http://www.sitename.com/WebResource.axd?d=OYuYekAZWSmOdOaJyDRqKg2&t=634022222718906250
Message: This is an invalid webresource request.
Source: System.Web
Inner Exception:
Stack Trace: at System.Web.Handlers.AssemblyResourceLoader.System.Web.IHttpHandler.ProcessRequest(HttpContext context)
at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
I can think of three causes, only one of which actually points to a real error as such:
Somebody is trying to exploit a security hole in ASP.NET (a patch here) where you can get one of the axd handlers to serve up any content on the web server if you can figure out the encryption key. It is unlikely, but it's possible. In this case, you don't have to do anything, except make sure that you have that fix applied.
(This is something I have observed on a site that I just finished.) The scenario is one where an existing site has been replaced, or where the web server has changed from what it was previously. If the site is public and indexed by Google, say, people will often view the 'cached' page when they're trying to get to old content, or when the current page is not what they expect (perhaps an error, or indeed just different). The problem is that if the site uses webresource.axd, the cached page will contain a reference to it. The browser opens the cached HTML and then makes the request. If the site has been changed or replaced, the old axd links may no longer be valid and will cause an error.
If this is an MVC site you might no longer be using resources rendered by the axds, in which case you could consider removing them from the site by editing the web.config and adding 'remove' entries for them in the handlers section of system.webServer (a sketch of those entries follows this list). The requests will then yield a 404 and you will no longer get errors. Equally, if the site does use axds legitimately, but you can't repro the errors yourself by browsing it, then you probably don't have anything to worry about.
The site runs in a web farm, and each machine therefore needs to have the same machine key. This is the one that actually needs attention if it applies. I left it till last, though, because looking at your other questions I'm assuming a web farm is not involved :)
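For reference, a minimal sketch of the web.config change described in the second cause. The handler names below are common integrated-mode defaults, so check the <handlers> section on your own server for the exact names:

    <system.webServer>
      <handlers>
        <!-- Stop serving the resource handlers so stale cached references
             simply return 404 instead of raising exceptions. -->
        <remove name="AssemblyResourceLoader-Integrated-4.0" />
        <remove name="ScriptResourceIntegrated-4.0" />
      </handlers>
    </system.webServer>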
I would have liked to format this answer better but I am on my phone, apologies!
In addition to Andras's thorough (and impressive, from a phone :) answer: there was a bug in IE8 (patched already, but some users may still have it in the wild) that used to cause a similar error. In that case the request came with garbage at the end, which does not seem to be your case, but it may give you a different thing to look at. Here's a question on SO that talks about it:
Invalid Webresource.axd parameters being generated
I have a requirement to control the ability for a client to download images/files based on who they are. We are calling an action with a parameter that allows me to sub in data from session to finish a path, without putting the real path on the client. We then return a FileResult from the controller.
It was working: if we found the image, we could create a stream and serve the file, and if we could not, we returned a default image. But we found that we could easily run into an issue where the stream was still open when a new request came in, which resulted in an error. This could then escalate to the point where even the default image could not be downloaded.
I have looked around a lot and we have tested several methods, but these conflicts can still occur. I have started to think that maybe this is not an IO issue, but more of an "I'm trying to do something wrong" issue.
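For context, a rough reconstruction of the pattern described above (all names are placeholders; I'm assuming ASP.NET Core MVC). The two details that usually matter for the locking problem are opening the file with FileShare.Read, so concurrent requests don't lock each other out, and letting the result own the stream so it is disposed once the response is written:

    [Authorize]
    public IActionResult GetImage(string fileKey)
    {
        // ResolvePathFromSession, _secureRoot and _defaultImagePath are
        // hypothetical stand-ins for the session-based path composition.
        var path = System.IO.Path.Combine(_secureRoot, ResolvePathFromSession(fileKey));

        if (!System.IO.File.Exists(path))
            path = _defaultImagePath; // fall back to the default image

        // FileShare.Read lets other requests read the same file concurrently;
        // the FileStreamResult returned by File() disposes the stream for us.
        var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);
        return File(stream, "application/octet-stream");
    }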
Is there a way to intercept a call for a static resource, and then adapt the path if it is looking in a specific location, without imposing the rule on all requests?
The closest I have found is creating a View Expander, where I can reinterpret the path of a requested resource, but it is not the same: one is compiled, the other is not.
I don't have any code to show for the new approach because it is uncertain and unknown. Searches have proven difficult because the terms collide with well-known solutions to topics that do not apply.
I am hoping that someone more knowledgeable can point me to a method that will treat files in a secure folder as if they were static resources, once I have determined the user is authorized to access them.
I am using Identity, but I do not extend that identity to system access, nor will I. The only user allowed can be the IISUsr, per my client.
Any help would be greatly appreciated!
I'm pretty inexperienced when it comes to working with IIS, so I apologize if the question is a bit confusing.
In the application, I have a Controller with a method called 'Login' that takes in a string parameter. The parameter identifies the organization the user is trying to authenticate against.
For example:
http://mysite1.com/Login/12345
Visiting this link brings the user to a login page for the organization that is associated with '12345' for their access key.
Is there any way to redirect users that are logging in under '12345' to another server? We have a few beta users that are willing to participate, but the database schemas for both servers are different, so it's important that the beta users are not hitting the wrong site.
After the user logs in, the access key is no longer in the URL, so I can't do matches against it.
I'd like for the user to see the following URL:
http://mysite1.com/Login/12345
http://mysite1.com/Products/
http://mysite1.com/Admin/
While in reality they're on a different server:
http://mysite2.com/Products/
http://mysite2.com/Admin/
I have to emphasize that I really do need the URL to stay 'mysite1' for the user, when in reality they'll be on 'mysite2'. Please let me know if this is possible or not, or if there's a better solution for it.
Sorry if this is a confusing scenario or if there's some information that I'm missing. I'll make edits if necessary.
Virtually anything is possible, but this approach seems really painful.
IIS can perform URL rewriting, but it does so before the request hits the authentication layer, so it will not be possible to differentiate users at that level.
It seems like the best option will be to write a custom URL rewriter provider. Looks like this post is attempting to solve it that way.
It really seems better to either redirect to a different server (which I know you're saying you can't do) or merge the multiple versions of functionality into a single app (with different controls/backend models, etc.)
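If you do end up on the rewriting route, one way around the authentication-layer limitation mentioned above is to have the application drop a cookie at login and key a proxy rule on it. A sketch, assuming the IIS URL Rewrite module plus Application Request Routing (ARR, with proxying enabled) are installed, and assuming the app sets a hypothetical 'BetaUser=1' cookie when access key 12345 logs in:

    <rewrite>
      <rules>
        <rule name="Proxy beta users to mysite2" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <!-- Only proxy requests carrying the (hypothetical) beta cookie. -->
            <add input="{HTTP_COOKIE}" pattern="BetaUser=1" />
          </conditions>
          <action type="Rewrite" url="http://mysite2.com/{R:1}" />
        </rule>
      </rules>
    </rewrite>

The browser keeps showing mysite1 URLs because the rewrite happens server-side, but be aware that auth cookies, schema differences, etc. all have to line up between the two servers.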
This link may help in understanding a little bit about how the flow works in an ASP.NET MVC app.
While going through MVC concepts, I read that it is not good practice to have code inside a 'GET' action which changes the state of server objects (DB updates, etc.).
'Caching of return data' has been given as a reason for this.
Could someone please explain this?
Thanks in advance!
This is per the HTTP standard. The GET verb is one that should be idempotent and safe.
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.

In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.

Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
Browsers can cache GET requests, generally for static data like images or scripts. But you can also allow browsers to cache GET requests to controller actions, using [OutputCache] or similar means. So if caching is turned on for a GET controller action, it's possible that clicking a link to /Home/Index doesn't actually run the Index method on the server, but instead lets the browser serve up the page from its own cache.
With this line of thinking, you can safely turn on caching on GET actions in which the data you're serving up doesn't change (or doesn't change often), with the knowledge that your server action won't fire every time.
POSTs won't be cached by the browser, so any POST is guaranteed to make it to the server.
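A minimal sketch of both halves of that point (attribute values are illustrative):

    using System.Web.Mvc;
    using System.Web.UI;

    public class HomeController : Controller
    {
        // The client may serve this GET from its cache for 60 seconds,
        // so the method body won't necessarily run on every click.
        [HttpGet]
        [OutputCache(Duration = 60, VaryByParam = "none",
                     Location = OutputCacheLocation.Client)]
        public ActionResult Index()
        {
            return View();
        }

        // State-changing work belongs in a POST, which the browser won't cache.
        [HttpPost]
        public ActionResult Delete(int id)
        {
            // ... perform the state change here (e.g. delete the row with this id) ...
            return RedirectToAction("Index");
        }
    }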
Ignore caching for a moment. Another way of thinking about this is that search engines will store HTTP GET links during their indexing/crawling process, therefore they will show up in search results.
Suppose your /Home/Index is implemented as a GET but, let's say, deletes a row in your database. Every time this link shows up in a search engine and somebody clicks it, a row gets deleted, and soon you will have a lot of deleted rows.
The HTTP spec states that GET and HEAD are expected to be safe, i.e. they should not change server state (safe methods are also idempotent: repeating the request has no further effect).
One practical aspect of this, is that search robots will issue GET against any link to your site they know of. If such a GET changes user data it was not meant to change, you are in trouble.
Being safe has the added benefit that clients may cache the result of a GET (use HTTP headers to control this).
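For example, in ASP.NET MVC those headers can be set per action (values illustrative):

    using System;
    using System.Web;
    using System.Web.Mvc;

    public class ProductsController : Controller
    {
        [HttpGet]
        public ActionResult Details(int id)
        {
            // Let clients cache this response for five minutes.
            Response.Cache.SetCacheability(HttpCacheability.Public);
            Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
            return View();
        }
    }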
I hope someone can cure my ignorance. I have encountered the following problem before and I can't seem to understand why it is happening.
I'm working on a project that has two web front ends running under different addresses, which essentially point to different directories under the same ASP.NET MVC2 project with different master pages.
Now I've created 3 sets of key pairs, one for localhost, one for site1, and one for site2, thinking ReCaptcha will only tell me the challenge is good if it comes from the appropriate host with the matching keys (and if I answer correctly, of course).
The reality is no matter which keys I set up, be it localhost or otherwise, the response is always positive.
Note that I've only tried this on my puny little home PC as a test project, so I don't know if things would go down differently once deployed to the production environment.
Thanks
I don't think you need separate keys for each website. I use the same keys on localhost and my live site and haven't had any trouble.
I'd suggest just using the one set of keys, and just trying to get it up and running with one website. It does work; I've been using it for about 2 years.
Failing that, show us your code :)
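In particular, show the server-side check. A common cause of 'always valid' is never actually posting the challenge to the verify endpoint, or ignoring its result. A minimal sketch of a manual check, assuming the classic verify API that was current in the MVC2 era:

    using System.Collections.Specialized;
    using System.Net;
    using System.Text;

    public static class RecaptchaVerifier
    {
        // The endpoint replies with "true" on the first line when the answer
        // is correct for the given private key; anything else is a failure.
        public static bool Verify(string privateKey, string remoteIp,
                                  string challenge, string response)
        {
            using (var client = new WebClient())
            {
                var fields = new NameValueCollection
                {
                    { "privatekey", privateKey },
                    { "remoteip",   remoteIp },
                    { "challenge",  challenge },
                    { "response",   response },
                };
                byte[] raw = client.UploadValues(
                    "http://www.google.com/recaptcha/api/verify", fields);
                return Encoding.UTF8.GetString(raw).StartsWith("true");
            }
        }
    }

If your code never makes (or never checks) a call like this, every challenge will appear to pass regardless of which key pair you configured.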
I'm developing a web app that has a database backend. In the past I've done stuff like:
http://page.com/view.aspx?userid=123 to view user 123's profile; using a querystring.
Is it considered good practice to use a querystring? Is there something else I should be doing?
I'm using C# 4.0 and ASP.net.
Your question isn't really a .NET question... it is a concern that every web framework and web developer deals with in some way.
Most agree that for the main user facing portion of your website you should avoid long query strings in favor of a url structure that makes "sense" to the website visitor. Try to use a logical hierarchy that when the visitor reads it there is a good chance they can deduce where they are on the site. Click around StackOverflow in a few areas and see what they have done with the url's. You usually have a pretty good idea what you're looking at and where you are.
A couple of other heads-ups... Although a lot of database lookups are done with the primary key, it's also a good idea to provide a user-friendly name of the resource in your url instead of just the primary key. You see StackOverflow doing that in the current address: they're doing the lookup with the primary key "3544483" but also including an SEO/user-friendly url parameter, "are-querystrings-in-net-good-practice." If someone emailed you that link you'd have a pretty good idea of what you're about to open up.
I'm not really sure how WebForms handles URL routing, but if you're struggling to grasp the concepts, go through the MVC NerdDinner tutorial. It covers some basic url routing that could help.
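For what it's worth, a minimal sketch of ASP.NET 4.0 WebForms routing (route and file names are illustrative); this maps /users/123 onto the existing view.aspx:

    using System;
    using System.Web.Routing;

    public class Global : System.Web.HttpApplication
    {
        void Application_Start(object sender, EventArgs e)
        {
            RouteTable.Routes.MapPageRoute(
                "UserProfile", "users/{userid}", "~/view.aspx");
        }
    }

In view.aspx.cs the value is then read from Page.RouteData.Values["userid"] instead of Request.QueryString["userid"].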
Query strings are perfectly fine if you're sure to lock down what people are meant to view. You should be checking for a valid value (a number, not null, etc.) and, if your application has security, whether a visitor has permission to view user 123's profile.
You could look into Session & ViewState, but QueryString seems to be what you're after.
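A minimal validation sketch along those lines, WebForms-style (the permission check is a hypothetical helper you'd implement against your own security model):

    // In view.aspx.cs
    protected void Page_Load(object sender, EventArgs e)
    {
        int userId;
        if (!int.TryParse(Request.QueryString["userid"], out userId))
        {
            Response.StatusCode = 400; // missing or malformed value
            Response.End();
            return;
        }
        if (!CurrentUserMayView(userId)) // hypothetical permission check
        {
            Response.StatusCode = 403;
            Response.End();
            return;
        }
        // ... load and display the profile for userId ...
    }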
If possible, I think this practice should be avoided, especially if you're passing auto-incrementing ids in plain text. In my opinion, you're almost teasing the user to manipulate the querystring value and see if they can get access to someone else's profile. Even with appropriate security measures in place (validating the request on the server side before rendering the page), I would still recommend encrypting the querystring param in this particular case.
I think using query strings is perfectly fine, but there's a case to be made for hackable URLs, in that they are more understandable to advanced users and are SEO-friendly. For example, I happen to think http://www.example.com/user/view/1234 looks more intuitive than http://www.example.com/view.aspx?user=1234.
And you don't have to alter your application to use pretty URLs if you're using IIS 7.0. The URL Rewrite Module and a few rewriting rules should be enough.
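For instance, a rule along these lines (pattern illustrative) would serve the pretty URL from the existing page:

    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Pretty user URLs" stopProcessing="true">
            <!-- /user/view/1234 is rewritten internally to view.aspx?user=1234 -->
            <match url="^user/view/([0-9]+)$" />
            <action type="Rewrite" url="view.aspx?user={R:1}" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>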
To answer your question clearly: yes, it's a good practice. In fact, it's expected behavior for a web site.
I totally agree with ShaderOp: you should use a url rewriter to get a nice-looking url. I'm also assuming that you will put in a bit of validation to prevent someone from manipulating the url and accessing data they shouldn't.
Query strings are OK, but don't compromise security with them.
If the profile being accessed belongs to the currently logged-in user, there's no need to send the uid at all. Just go to /profile and load the logged-in user's information.
If you are looking at another member's profile, I recommend going with their 'username', an encrypted id, or a GUID.
Exposing user ids to clients is generally not a good idea.