Checking integrity of JavaScript, user code = server code - C#

Is there a way to verify whether the user has changed the jQuery/JavaScript with Firebug while using the page on the client side?

No, there is no way to verify this.
When it comes to input from the browser, you should always verify and validate. Never trust the client.

No. The client is fundamentally unsafe and belongs to the user, not you.

Short answer: it doesn't matter.
Long answer:
It matters if you are treating the JavaScript as part of your application structure, similar to how a SQL injection attack does bad things to your system. You should validate and sanitize anything that gets passed from the client before it is stored. The interesting attack vector here is if you allow me to persist elements into the structure of the web page and retrieve them at a later time: you have opened the door to a stored XSS attack (one of my favorites). This is indicative of a failure to sanitize user input and/or a failure to separate UI concerns from system-level code.
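In practice, "never trust the client" means recomputing anything security-relevant on the server and treating client-supplied values purely as requests. A minimal sketch, assuming a hypothetical order service:

    using System;

    // A minimal sketch, not a definitive implementation: the service and
    // the price lookup are hypothetical.
    public class OrderService
    {
        public decimal GetOrderTotal(int productId, int quantity)
        {
            // Validate on the server; any JavaScript checks may have been
            // bypassed or rewritten in Firebug.
            if (quantity < 1 || quantity > 100)
                throw new ArgumentOutOfRangeException("quantity");

            // Use the server's own price, ignoring any client-computed total.
            decimal unitPrice = LookupPrice(productId);
            return unitPrice * quantity;
        }

        private decimal LookupPrice(int productId)
        {
            // Placeholder: a real application would query the database here.
            return 9.99m;
        }
    }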

Related

MVC GET vs POST

While going through MVC concepts, I read that it is not good practice to have code inside a 'GET' action that changes the state of server objects (DB updates, etc.).
'Caching of return data' has been given as a reason for this.
Could someone please explain this?
Thanks in advance!
This comes from the HTTP standard. The GET verb should be idempotent and safe.
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in
their interactions over the Internet, and should be careful to allow
the user to be aware of any actions they might take which may have an
unexpected significance to themselves or others.
In particular, the convention has been established that the GET and
HEAD methods SHOULD NOT have the significance of taking an action
other than retrieval. These methods ought to be considered "safe".
This allows user agents to represent other methods, such as POST, PUT
and DELETE, in a special way, so that the user is made aware of the
fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not
generate side-effects as a result of performing a GET request; in
fact, some dynamic resources consider that a feature. The important
distinction here is that the user did not request the side-effects, so
therefore cannot be held accountable for them.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
Browsers can cache GET requests, generally for static data like images or scripts. But you can also allow browsers to cache GET requests to controller actions, using [OutputCache] or similar mechanisms. So if caching is turned on for a GET controller action, it's possible that clicking a link leading to /Home/Index doesn't actually run the Index method on the server, but instead lets the browser serve the page from its own cache.
With this line of thinking, you can safely turn on caching on GET actions in which the data you're serving up doesn't change (or doesn't change often), with the knowledge that your server action won't fire every time.
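A minimal sketch of such a cached GET action (the controller is illustrative):

    using System.Web.Mvc;

    public class HomeController : Controller
    {
        // The rendered result is cached for 60 seconds. During that window
        // a request for /Home/Index may be served from cache without this
        // method ever running on the server.
        [OutputCache(Duration = 60, VaryByParam = "none")]
        public ActionResult Index()
        {
            return View();
        }
    }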
POSTs won't be cached by the browser, so any POST is guaranteed to make it to the server.
Ignore caching for a moment. Another way of thinking about this is that search engines will store HTTP GET links during their indexing/crawling process; therefore those links will show up in search results.
Suppose your /Home/Index is implemented as a GET but, say, deletes a row in your database. Every time this link shows up in a search engine and somebody clicks it, a row gets deleted, and soon you have a lot of deleted rows.
The HTTP spec states that GET and HEAD are expected to be safe, i.e. they should not change server state.
One practical aspect of this is that search robots will issue a GET against any link to your site they know of. If such a GET changes user data it was not meant to change, you are in trouble.
Being safe and idempotent has the added benefit that clients may cache the result of a GET (use HTTP headers to control this).
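In Web Forms, for example, those HTTP caching headers can be set from the page itself; a minimal sketch, assuming a hypothetical read-only page:

    using System;
    using System.Web;
    using System.Web.UI;

    public partial class ProductList : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Tell clients and proxies they may cache this safe GET
            // for ten minutes.
            Response.Cache.SetCacheability(HttpCacheability.Public);
            Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(10));
            // ...render read-only data; no server state is modified here...
        }
    }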

What is the best practice to handle dangerous characters in asp.net?

What is the best practice to handle dangerous characters in asp.net?
see example: asp.net sign up form
Should you:
use JavaScript to prevent them from entering it into the textbox in the first place?
have a general function that does a find and replace on the server side?
The problem with #1 is that it will increase page load time.
ASP.NET handles potentially dangerous characters for you by default, since ASP.NET 2.0. From Request Validation in ASP.NET:
Request validation is a feature in ASP.NET that examines an HTTP
request and determines whether it contains potentially dangerous
content. In this context, potentially dangerous content is any HTML
markup or JavaScript code in the body, header, query string, or
cookies of the request. ASP.NET performs this check because markup or
code in the URL query string, cookies, or posted form values might
have been added for malicious purposes.
Request validation helps prevent this kind of attack. If ASP.NET
detects any markup or code in a request, it throws a "potentially
dangerous value was detected" error and stops page processing.
Perhaps the most important bit of this is that it happens on the server; regardless of the client accessing your application, they cannot just turn off JavaScript to work around it.
Solution number 1 won't increase load time by much.
You should ALWAYS use solution number 2 along with solution number 1, because users can turn off JavaScript in their browsers.
You accept them like regular characters on the write side. When rendering, you encode your output. You have to encode it anyway, regardless of security, so that you can display special characters.
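A minimal sketch of this write-raw, encode-on-output pattern (the renderer class is hypothetical):

    using System.Web;

    public static class CommentRenderer
    {
        // The stored comment is exactly what the user typed (written with
        // a parameterized query); encoding happens only here, at render
        // time, so characters like < and ' display correctly and safely.
        public static string RenderComment(string storedComment)
        {
            return "<p>" + HttpUtility.HtmlEncode(storedComment) + "</p>";
        }
    }

In ASP.NET 4.0 markup, the <%: %> syntax performs the same HTML encoding automatically.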
What is the best practice to handle dangerous characters in asp.net?
I did not watch the screencast you link to (questions should be self-contained anyway), but there are no dangerous characters. It all depends on the context. Take Stack Overflow, for example: it lets me input the characters Dangerous!'); DROP TABLE Questions--. Nothing dangerous there.
ASP.NET itself will do its best to prevent malicious input at the HTTP level: it won't let any user access files like web.config or files outside your web root.
As soon as you start doing something with user input, it's up to you. There's no silver bullet, no one rule that fits them all. If you're going to display the user input as HTML, you'll have to make sure you only allow harmless markup tags without any scriptable attributes. If you're allowing users to upload images, make sure only images get uploaded. If you're going to send input to an RDBMS, be sure to escape characters that have meaning for the database manipulation language.
And so on.
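For the RDBMS case in particular, parameterized queries beat hand-escaping; a minimal sketch (table and column names are illustrative):

    using System.Data.SqlClient;

    public static class QuestionStore
    {
        public static void SaveTitle(string connectionString, string userInput)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "INSERT INTO Questions (Title) VALUES (@title)", conn))
            {
                // The value travels as data, never as SQL text, so
                // characters like ' or -- lose any special meaning.
                cmd.Parameters.AddWithValue("@title", userInput);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }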
ALWAYS validate input on the server; this should not even be a discussion, just do it!
Client-side validation is just eye candy for the user, but the server is where it counts!
Thinking that
ASP .NET handles potentially dangerous characters for you, by default since ASP .NET 2.0. From Request Validation in ASP.NET:
is like thinking that a solid door will keep a thief out. It won't; it will only slow him down. You have to know the most common attack vectors and the possible solutions. You must understand that every, EVERY variable (field/property) you write into HTML/CSS/JavaScript is a potential attack vector that must be sanitized (through the use of appropriate libraries, like some methods included in newer MVC.NET, or at least the <%: %> syntax of ASP.NET 4.0), no exceptions. Every, EVERY query you execute is a potential attack vector that must be sanitized through the exclusive use of an ORM and parameterized queries, no exceptions. Passwords must never be stored in the DB in plain text. And tons of other similar things. It isn't very difficult, but laziness, complacency, and ignorance will make it harder (if not nearly impossible). If it isn't you who introduces the hole, it's the programmer on your left, or the programmer on your right. There is no hope.

Validating your site

My question is this: what else needs to be validated, apart from what I have below?
It is important that any input to a site is properly validated:
Textboxes, etc – use .NET validators (or custom code if the validators aren’t appropriate)
Querystring or Form values – use manual validation (casting to specific types, boundary checking, etc)
This ties into the problems which XSS can reveal.
Basically you have to validate any input that someone could potentially tamper with:
Form Postbacks (mainly .NET Controls – these can be validated with .NET validation controls. Also, if you have Request Validation turned on for all pages, this reduces the risk)
QueryString Values
Cookie values
HTTP Headers
Viewstate (automatically done for you as long as you have ViewState MAC enabled)
JavaScript (all JS can be viewed and changed, so you need to ensure no crucial functionality is handled by JavaScript alone, i.e. always enable server-side validation)
There is a lot that can go wrong with a web application. Your list is pretty comprehensive, although it contains some duplication. At the HTTP level there are really only a few input channels: GET (the query string), POST (the body), cookies, and headers. There are many different kinds of POST data, but they all arrive in the same part of the request.
To your list I would also add everything having to do with file upload, which is a type of POST: for instance, the file name, the MIME type, and the contents of the file. I would fire up a network monitoring application like Wireshark; everything in the request should be considered potentially harmful.
There will never be a one-size-fits-all validation function. If you are merging SQL injection and XSS sanitization functions, then you may be in trouble. I recommend testing your site using automation. A free service like Sitewatch or an open-source tool like skipfish will detect methods of attack that you have missed.
Also, on a side note: passing the view state around with a MAC and/or encryption is a gross misuse of cryptography. Cryptography is a tool to use when there is no other solution. By using a MAC or encryption you are opening the door for an attacker to brute-force this value or use something like a padding oracle attack against you. View state should be kept track of by the server, period, end of story.
I would suggest a different way of looking at the problem that is orthogonal to what you have here (and hence not incompatible, there's no reason why you can't examine it both ways in case you catch with one what you miss with another).
The two things that are important in any validation are:
Things you pay attention to.
Things you pass to another layer untouched.
Now, most of the things you've mentioned so far fit into the first category. Cookies that you ignore fit into the second, as would query & post information if you passed it to another handler with Server.Execute or similar.
The second category is the most debatable.
On the one hand, if a given handler (.aspx page, IHttpHandler, etc.) ignores a cookie that may be used by another handler at some point in the future, it's mostly up to that other handler to validate it.
On the other hand, it's always good to have an approach that assumes other layers have security holes and you shouldn't trust them to be correct, even if you wrote them yourself (especially if you wrote them yourself!)
A middle-ground position is that if there are perhaps 5 different states some persistent data could validly be in, but only 3 make sense when a particular piece of code is hit, it might verify that it is in one of those 3 states, even if that doesn't pose a risk to that particular code.
That done, we'll concentrate on the first category.
Querystrings, form-data, post-backs, headers and cookies all fall under the same category of stuff that came from the user (whether they know it or not). Indeed, they are sometimes different ways of looking at the same thing.
Of this, there is a subset that we will actually work upon in any way.
Of that there is a range of legal values for each such item.
Of that, there is a range of legal combinations of values for the items as a whole.
Validation therefore becomes a matter of:
Identify what input we will act upon.
Make sure that each component of that input is valid in its own right.
Make sure that the combinations are valid (e.g. it may be valid to not send a credit card number, but invalid to not send one while setting the payment type to "credit card").
Now, when we come to this, it's generally best not to try to catch specific attacks. For example, it's not so good to filter out ' in values that will be passed to SQL. Rather, we have three possibilities (sketched in code after this list):
It's invalid to have ' in the value because it doesn't belong there (e.g. a value that can only be "true" or "false", or from a set list of values in which none of them contain '). Here we catch the fact that it isn't in the set of legal values, and ignore the precise nature of the attack (thus being protected also from other attacks we don't even know about!).
It's valid as human input, but not as what we will use. An example here is a large number (in some cultures ' is used to separate thousands). Here we canonicalise both "123,456,789" and "123'456'789" to 123456789 and don't care what it was like before that, as long as we can meaningfully do so (the input wasn't "fish" or a number that is out of the range of legal values for the case in hand).
It's valid input. If your application blocks apostrophes in name fields in an attempt to block SQL injection, then it's buggy, because there are real names with apostrophes out there. In this case we consider "d'Eath" and "O'Grady" to be valid input and deal with the fact that ' is significant in SQL by escaping properly (ideally by using an API for data access that will do this for us).
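A minimal sketch of these three cases (method names are illustrative):

    using System.Globalization;

    public static class InputRules
    {
        // Case 1: whitelist. ' is rejected simply because it is not in
        // the set of legal values, whatever attack it may have been
        // part of.
        public static bool IsValidPaymentType(string input)
        {
            return input == "cash" || input == "credit card";
        }

        // Case 2: canonicalise. "123,456,789" and "123'456'789" both
        // become 123456789 before any further checks or processing.
        public static bool TryParseQuantity(string input, out long value)
        {
            string digitsOnly = input.Replace(",", "").Replace("'", "");
            return long.TryParse(digitsOnly, NumberStyles.None,
                                 CultureInfo.InvariantCulture, out value);
        }

        // Case 3 is not a validation function at all: "O'Grady" is
        // accepted as-is and passed to the database as a parameter,
        // never pasted into SQL text by hand.
    }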
A classic example of the third point with ASP.NET is code that blocks "suspicious" input containing < and >, something that makes a great number of ASP.NET pages buggy. Granted, it's better to be buggy by blocking input inappropriately than buggy by accepting it inappropriately, but the defaults are there for people who haven't thought about validation, to stop them from hurting themselves too badly. Since you are thinking about validation, you should consider whether it's appropriate to turn that automatic validation off and then treat < and > in a manner appropriate for your given use.
Note also that I haven't said anything about JavaScript. I don't validate JavaScript (unless perhaps I was actually receiving it as input); I ignore it. I pretend it doesn't exist, and then I won't miss a case where its validation could be tampered with. Pretend yours doesn't exist at this layer too. Ultimately, client-side validation is there to save the good guys time when they make honest mistakes, not to thwart the bad guys.
For similar reasons, this is best not tested through a browser. Use Fiddler to construct requests that hit the validation points you want to examine. This way all client-side validation is by-passed, and you're looking at the server the same way an attacker will.
Finally, remember that a page with 100% perfect validation is not necessarily secure. E.g. if your validation is perfect but your authentication is poor, then someone can send "valid" input to it that is just as nasty, perhaps more so, than the more classic SQL-injection or XSS code. That touches on other topics that belong to other questions, except to say that validation as discussed here is only part of the puzzle.

Securely passing variable values from one page to another

When passing a variable from one page to another,
to prevent the user from messing around with the URL parameter values,
is it best to ...
1) pass the variable via session
2) pass the variable in the URL along with a signature
As long as you're passing a signature, it wouldn't matter where you pass the values, because you will always check the signature's integrity.
What I would do is pass everything (including the signature) in the session. Just to keep the URL clean. But that's up to you and your particular use case.
If you use the session, the user cannot control the contents of the values.
Also, if you have view state encryption enabled, you could use the view state. The advantage of the view state is that it's localized to a single page. This means that when the user has two tabs open of your website, the variables are localized to the specific tabs.
See http://www.codeproject.com/KB/viewstate/AccessViewState.aspx for how to access view state from another page.
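A minimal sketch of the session approach in Web Forms (page and key names are hypothetical):

    using System;
    using System.Web.UI;

    public partial class PageA : Page
    {
        protected void GoToPageB(object sender, EventArgs e)
        {
            // The value lives only on the server; nothing user-editable
            // appears in the URL.
            Session["SelectedUserId"] = 123;
            Response.Redirect("PageB.aspx");
        }
    }

    public partial class PageB : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            int? selectedUserId = Session["SelectedUserId"] as int?;
            if (selectedUserId == null)
            {
                // Missing or expired: the user navigated here directly.
                Response.Redirect("PageA.aspx");
            }
        }
    }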
Depends on your use case. Session IS in most cases safer. If someone can compromise your server to get your session data, then you have different things to worry about. It would be bad though if you store session data in a place where other people can see it ;-).
A URL signature could theoretically be brute-forced. Since the parameters are probably short and may sometimes be predictable, it may give someone who knows about cryptography a point of attack. This is not trivial, though. But if security is your top priority, I'd not allow this data to leave your server.
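If you do go the signature route, use a keyed MAC rather than anything home-grown; a minimal sketch, with key handling simplified for illustration:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    public static class QuerySigner
    {
        // In practice, load the key from protected configuration,
        // not a hard-coded string.
        private static readonly byte[] Key =
            Encoding.UTF8.GetBytes("replace-with-a-real-server-side-secret");

        public static string Sign(string value)
        {
            using (var hmac = new HMACSHA256(Key))
            {
                return Convert.ToBase64String(
                    hmac.ComputeHash(Encoding.UTF8.GetBytes(value)));
            }
        }

        // Note: a constant-time comparison is preferable in production.
        public static bool IsValid(string value, string signature)
        {
            return Sign(value) == signature;
        }
    }

The redirect URL then carries both pieces, e.g. page.aspx?userid=123&sig=... where sig is Uri.EscapeDataString(QuerySigner.Sign("123")).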
If you are really worried about the user going crazy and stripping down parameters, then you can go with session state; however, you may lose history, i.e. the Back/Forward buttons.
The second option looks good, but if the user strips things out you can't be sure the parameter even existed.
So a mix of both looks good.

Are Querystrings in .NET Good Practice?

I'm developing a web app that has a database backend. In the past I've done stuff like:
http://page.com/view.aspx?userid=123 to view user 123's profile; using a querystring.
Is it considered good practice to use a querystring? Is there something else I should be doing?
I'm using C# 4.0 and ASP.net.
Your question isn't really a .NET question... it is a concern that every web framework and web developer deals with in some way.
Most agree that for the main user-facing portion of your website you should avoid long query strings in favor of a URL structure that makes "sense" to the website visitor. Try to use a logical hierarchy so that when the visitor reads it there is a good chance they can deduce where they are on the site. Click around Stack Overflow in a few areas and see what they have done with the URLs. You usually have a pretty good idea what you're looking at and where you are.
A couple of other heads-ups... Although a lot of database lookups are done with the primary key, it's also a good idea to provide a user-friendly name of the resource in your URL instead of just the primary key. You see Stack Overflow doing that in the current address, where the lookup is done with the primary key "3544483" but an SEO/user-friendly URL parameter, "are-querystrings-in-net-good-practice", is also included. If someone emailed you that link you'd have a pretty good idea of what you're about to open.
I'm not really sure how Web Forms handles URL routing, but if you're struggling to grasp the concepts, go through the MVC NerdDinner tutorial. It covers some basic URL routing that could help.
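For reference, a minimal sketch of MVC routing that produces such URLs (route, controller, and action names are illustrative):

    using System.Web.Mvc;
    using System.Web.Routing;

    public static class RouteConfig
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            // Maps /user/view/123 to UserController.View(123) instead of
            // view.aspx?userid=123.
            routes.MapRoute(
                name: "UserProfile",
                url: "user/view/{id}",
                defaults: new { controller = "User", action = "View" });
        }
    }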
Query strings are perfectly fine if you're sure to lock down what people are meant to view. You should be checking for a valid value (a number, not null, etc.) and, if your application has security, whether a visitor has permission to view user 1245's profile.
You could look into Session and ViewState, but QueryString seems to be what you're after.
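A minimal sketch of that locking-down in a Web Forms code-behind (the permission check is hypothetical):

    using System;
    using System.Web.UI;

    public partial class ViewProfile : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            int userId;
            if (!int.TryParse(Request.QueryString["userid"], out userId))
            {
                Response.StatusCode = 400;  // not a number: reject outright
                Response.End();
            }

            if (!CurrentVisitorMayView(userId))
            {
                Response.StatusCode = 403;  // valid id, but no permission
                Response.End();
            }

            // ...load and render the profile...
        }

        // Hypothetical check against your own authorization scheme.
        private bool CurrentVisitorMayView(int userId)
        {
            return User.Identity.IsAuthenticated; // placeholder logic
        }
    }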
If possible, I think this practice should be avoided, especially if you're passing auto-incrementing IDs in plain text. In my opinion, you're almost teasing the user to manipulate the query string value and see if they can get access to someone else's profile. Even with appropriate security measures in place (validating the request on the server side before rendering the page), I would still recommend encrypting the query string parameter in this particular case.
I think using query strings is perfectly fine, but there's a case to be made for hackable URLs, in that they are more understandable to advanced users and are SEO-friendly. For example, I happen to think http://www.example.com/user/view/1234 looks more intuitive than http://www.example.com/view.aspx?user=1234.
And you don't have to alter your application to use pretty URLs if you're using IIS 7.0. The URL Rewrite Module and a few rewriting rules should be enough.
To answer your question clearly: yes, it's good practice. In fact, it's expected behavior for a web site.
I totally agree with ShaderOp: you should use a URL rewriter to get a nice-looking URL. And I'm assuming you will put in a bit of validation to prevent someone from manipulating the URL and accessing data they don't deserve.
Query strings are OK, but don't compromise security with them.
If the profile being accessed is that of the currently logged-in user, there's no need to send the UID at all. Just go to /profile and load the current logged-in user's information.
If you are looking at another member's profile, I recommend just going with their username, an encrypted ID, or a GUID.
Exposing user IDs to clients is generally not a good idea.
