While going through MVC concepts, I have read that it is not good practice to have code inside a 'GET' action which changes the state of server objects (DB updates, etc.).
'Caching of return data' has been given as a reason for this.
Could someone please explain this?
Thanks in advance!
This is per the HTTP standard. The GET verb is one that should be idempotent and safe.
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in
their interactions over the Internet, and should be careful to allow
the user to be aware of any actions they might take which may have an
unexpected significance to themselves or others.
In particular, the convention has been established that the GET and
HEAD methods SHOULD NOT have the significance of taking an action
other than retrieval. These methods ought to be considered "safe".
This allows user agents to represent other methods, such as POST, PUT
and DELETE, in a special way, so that the user is made aware of the
fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not
generate side-effects as a result of performing a GET request; in
fact, some dynamic resources consider that a feature. The important
distinction here is that the user did not request the side-effects, so
therefore cannot be held accountable for them.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
Browsers can cache GET requests, generally for static data like images or scripts. But you can also allow browsers to cache GET requests to controller actions, using [OutputCache] or similar mechanisms. So if caching is turned on for a GET controller action, clicking a link leading to /Home/Index may not actually run the Index method on the server; the browser may serve up the page from its own cache instead.
With this line of thinking, you can safely turn on caching on GET actions in which the data you're serving up doesn't change (or doesn't change often), with the knowledge that your server action won't fire every time.
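For example, a minimal sketch of such a cached GET action (the 60-second duration is arbitrary):
using System.Web.Mvc;

public class HomeController : Controller
{
    // Cache the rendered output for 60 seconds; within that window a
    // click on a link to /Home/Index may never reach this method.
    [OutputCache(Duration = 60, VaryByParam = "none")]
    public ActionResult Index()
    {
        return View();
    }
}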
POSTs won't be cached by the browser, so any POST is guaranteed to make it to the server.
Ignore caching for a moment. Another way of thinking about this is that search engines will store HTTP GET links during their indexing/crawling process, therefore they will show up in search results.
Suppose your /Home/Index action is implemented as a GET but, let's say, deletes a row in your database. Every time this link shows up on a search engine and somebody clicks it, a row gets deleted, and soon you have a lot of deleted rows.
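This is why a destructive operation belongs on a POST-only action. A rough sketch in ASP.NET MVC (the controller, action, and delete logic here are just examples):
using System.Web.Mvc;

public class RowsController : Controller
{
    // Safe GET: only displays data.
    public ActionResult Index()
    {
        return View();
    }

    // POST-only: a crawler following links can never trigger this.
    [HttpPost]
    public ActionResult Delete(int id)
    {
        // ... delete the row with the given id ...
        return RedirectToAction("Index");
    }
}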
The HTTP spec states that GET and HEAD are expected to be safe and idempotent, i.e. they should not change server state.
One practical aspect of this is that search robots will issue a GET against any link to your site they know of. If such a GET changes user data it was not meant to change, you are in trouble.
Being idempotent has the added benefit that clients can cache the result of a GET (use HTTP headers to control this).
I want to build an "audit trail" for all requests incoming to the server, however it needs to be specific per user, per web page.
For instance I imagine something like this:
On initial view render I would store (in a cookie/page variable/something else) a unique Id saying the user browsed to /myapp.com/dashboard/1234 - maybe in the layout.cshtml.
Then the app fires off X number of GET/ POST requests to the server each having that same unique Id initially tied to the view rendered.
This allows me then to tie back all requests for a page and add up the server execution time.
I tried using path-specific cookies, but I realized this won't work since a user can have many tabs open with the same URL. Also, the user works in many areas of the app at once; they can have anywhere from 1 to 10+ tabs open. Each of these should have its own unique Id and "audit trail" of all calls taking place on that page.
This is an existing app, so modifying each of the GET/POST requests to pass in the unique Id is out of scope. Just hoping I am missing something that might take care of this.
Thank you!
If I'm understanding you correctly, you have a single page load, and then additional requests made either for images and other resources or AJAX requests that you want tied to and tracked along with that initial page load.
The chief problem you're going to have here is that, based on the way HTTP works, each request is handled as its own thing and not considered as part of a greater whole. The web browser makes it all look seamless, but all the web server is doing is responding to a bunch of (as far as it knows) unrelated requests for various different things. To track them all as one unit, you would either need to attach some unique id to the request itself (for a GET, that would be as part of the URI path or query string) or lean on Session to introduce state between the requests. However, session state really only works in this scenario when all requests can be tied to a single initial request. Once the user starts working with multiple different pages at once, there's no reasonable way to discern which request belongs to which page, and you're back in the same boat.
In other words, your only real option is to send something along with the request, which would mean doing something like:
<link rel="stylesheet" type="text/css" href="/path/to/file.css?origin=@Request.RawUrl" />
Then, you could have an action filter that looks for origin in the query string of any request, and ties it to the logging for that particular page.
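A rough sketch of such a filter (the attribute name and the logging call are placeholders for whatever audit store you actually use):
using System.Diagnostics;
using System.Web.Mvc;

public class OriginTrackingAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        string origin = filterContext.HttpContext.Request.QueryString["origin"];
        if (!string.IsNullOrEmpty(origin))
        {
            // Tie this request back to the page that issued it.
            Trace.TraceInformation("{0} requested by page {1}",
                filterContext.HttpContext.Request.RawUrl, origin);
        }
        base.OnActionExecuting(filterContext);
    }
}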
For what it's worth, it should be noted that by default, IIS will handle all requests for static resources directly, without involving ASP.NET. If you do want to track requests for static resources, you would have to pass them all through ASP.NET, which will be kind of a pain. If you only want to track AJAX requests, that's much simpler and shouldn't require anything special for the most part.
All that said, if the only purpose of this is to track page load time, there are far better and easier ways to do that. You can install Glimpse. You can use your browser's developer console. You can use something like Google Analytics. All of these are far preferable to the path you're going down here, for page load statistics.
Write an ActionFilter to do this. There are many examples of this:
http://rion.io/2013/04/15/creating-advanced-audit-trails-using-actionfilters-in-asp-net-mvc/
http://blog.ploeh.dk/2014/06/13/passive-attributes/
I personally like Mark Seemann's example more since it clearly defines a nice separation of concerns for the attribute and the filter.
I have this solution (that works), but I would like to know if there's a way to make a loop that checks whether the method name is posted and, if it is, runs the method. Current code:
if (HttpContext.Current.Request["FunctionName"] != null)
{
    switch (HttpContext.Current.Request["FunctionName"])
    {
        case "DoStuff":
            DoStuff();
            break;
        // ... etc.
    }
}
Hope you get the idea, otherwise I'll elaborate.
Thanks in advance!
You could call GetType().GetMethod(HttpContext.Current.Request["FunctionName"], new Type[]{}), which would return a MethodInfo that you could invoke (a sketch follows at the end of this answer). I wouldn't, though, for a few reasons:
The general diciness of do-whatever-the-user-tells-you is high enough that, even with the assurance that this was done in a class where every method (including inherited ones) was safe to run, I'd rather be more active in parsing requests from potentially malicious users.
There'd have to be a lot of such methods before the convenience of this outweighed the relative cost, and at that point I'd wonder about the specification of the resource in question. URIs should map to resources with well-defined meanings, rather than including everything but the kitchen sink. There should only be a small number of possible values for the function name anyway.
The title says you're taking this from the query string, which suggests you're reacting to a GET by doing different things. GETs should be "look at" operations that return the state of the thing looked at. This can certainly involve doing quite a bit (the classic example is a search that does a lot of complicated comparisons, possibly against a variety of different sources, but is still a "look at" operation). The query string should not select a choice of actions; that should be done by examining the information POSTed to the resource, or better yet POSTed to resources with completely different URIs for each sort of operation.
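For completeness, the dispatch I described at the top would look roughly like this (illustration only; the reasons above still apply):
string name = HttpContext.Current.Request["FunctionName"];
var method = name == null ? null : GetType().GetMethod(name, new Type[] { });
if (method != null)
{
    method.Invoke(this, new object[] { }); // runs any matching public parameterless method!
}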
Based on the follow-up comments, I would create context-specific handlers rather than one handler to process all generic requests. Otherwise, integrate an MVC framework into the WebForms project and let the MVC framework handle object/method delegation.
What else needs to be validated apart from what I have below? That is my question.
It is important that any input to a site is properly validated:
Textboxes, etc – use .NET validators (or custom code if the validators aren’t appropriate)
Querystring or Form values – use manual validation (casting to specific types, boundary checking, etc)
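For example, the kind of manual check I mean for query string values (a rough sketch; "page" is just an example key):
// Cast to the expected type and boundary-check before use.
int page;
if (!int.TryParse(Request.QueryString["page"], out page) || page < 1 || page > 1000)
{
    page = 1; // reject anything unexpected and fall back to a safe default
}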
This ties into the problems which XSS can reveal.
Basically you have to validate any input that someone could potentially tamper with:
Form Postbacks (mainly .NET controls – these can be validated with .NET validation controls. Also, if you have Request Validation turned on for all pages, this reduces the risk)
QueryString Values
Cookie values
HTTP Headers
Viewstate (automatically done for you as long as you have ViewState MAC enabled)
Javascript (all JS can be viewed and changed, so you need to ensure no crucial functionality is handled by JavaScript, i.e. always enable server-side validation)
There is a lot that can go wrong with a web application. Your list is pretty comprehensive, although there is some duplication. The HTTP spec only speaks of GET, POST, cookies and headers. There are many different types of POST, but they all live in the same part of the request.
To your list I would also add everything to do with file upload, which is a type of POST: for instance, the file name, MIME type, and the contents of the file. I would fire up a network monitoring application like Wireshark; everything in the request should be considered potentially harmful.
There will never be a one-size-fits-all validation function. If you are merging SQL injection and XSS sanitization functions then you may be in trouble. I recommend testing your site using automation. A free service like Sitewatch or an open source tool like skipfish will detect methods of attack that you have missed.
Also, on a side note: passing the view state around with a MAC and/or encryption is a gross misuse of cryptography. Cryptography is a tool used when there is no other solution. By using a MAC or encryption you are opening the door for an attacker to brute force this value or use something like a padding oracle attack against you. View state should be kept track of by the server, period, end of story.
I would suggest a different way of looking at the problem that is orthogonal to what you have here (and hence not incompatible, there's no reason why you can't examine it both ways in case you catch with one what you miss with another).
The two things that are important in any validation are:
Things you pay attention to.
Things you pass to another layer untouched.
Now, most of the things you've mentioned so far fit into the first category. Cookies that you ignore fit into the second, as would query and post information if you passed it to another handler with Server.Execute or similar.
The second category is the most debatable.
On the one hand, if a given handler (.aspx page, IHttpHandler, etc.) ignores a cookie that may be used by another handler at some point in the future, it's mostly up to that other handler to validate it.
On the other hand, it's always good to have an approach that assumes other layers have security holes and you shouldn't trust them to be correct, even if you wrote them yourself (especially if you wrote them yourself!)
A middle-ground position is that if there are perhaps 5 different states some persistent data could validly be in, but only 3 make sense when a particular piece of code is hit, it might verify that it is in one of those 3 states, even if that doesn't pose a risk to that particular code.
That done, we'll concentrate on the first category.
Querystrings, form-data, post-backs, headers and cookies all fall under the same category of stuff that came from the user (whether they know it or not). Indeed, they are sometimes different ways of looking at the same thing.
Of this, there is a subset that we will actually work upon in any way.
Of that there is a range of legal values for each such item.
Of that, there is a range of legal combinations of values for the items as a whole.
Validation therefore becomes a matter of:
Identify what input we will act upon.
Make sure that each component of that input is valid in its own right.
Make sure that the combinations are valid (e.g. it may be valid to not send a credit card number, but invalid to not send one while setting the payment type to "credit card").
Now, when we come to this, it's generally best not to try to catch certain attacks. For example, it's not so good to avoid ' in values that will be passed to SQL. Rather, we have three possibilities:
It's invalid to have ' in the value because it doesn't belong there (e.g. a value that can only be "true" or "false", or from a set list of values in which none of them contain '). Here we catch the fact that it isn't in the set of legal values, and ignore the precise nature of the attack (thus being protected also from other attacks we don't even know about!).
It's valid as human input, but not as what we will use. An example here is a large number (in some cultures ' is used to separate thousands). Here we canonicalise both "123,456,789" and "123'456'789" to 123456789 and don't care what it was like before that, as long as we can meaningfully do so (the input wasn't "fish" or a number that is out of the range of legal values for the case in hand).
It's valid input. If your application blocks apostrophes in name fields in an attempt to block SQL injection, then it's buggy, because there are real names with apostrophes out there. In this case we consider "d'Eath" and "O'Grady" to be valid input and deal with the fact that ' is significant in SQL by escaping properly (ideally by using an API for data access that will do this for us).
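For that third case, a minimal sketch of letting the data-access API do the escaping (the table, column, and connectionString are invented for illustration):
using System.Data.SqlClient;

// "O'Grady" travels as a parameter value, never spliced into the SQL
// text, so the apostrophe needs no special treatment.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT Id FROM Users WHERE Surname = @surname", conn))
{
    cmd.Parameters.AddWithValue("@surname", "O'Grady");
    conn.Open();
    object id = cmd.ExecuteScalar();
}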
A classic example of the third point with ASP.NET is code that blocks "suspicious" input with < and > - something that makes a great number of ASP.NET pages buggy. Granted, it's better to be buggy in blocking that inappropriately than buggy by accepting it inappropriately, but the defaults are for people who haven't thought about validation and trying to stop them from hurting themselves too badly. Since you are thinking about validation, you should consider whether it's appropriate to turn that automatic validation off and then treat < and > in a manner appropriate for your given use.
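If you do decide to turn it off, a sketch of what that can look like in MVC (the controller, action, and field names are examples only):
using System.Web;
using System.Web.Mvc;

public class CommentsController : Controller
{
    // Opt this one action out of automatic request validation and handle
    // "<" and ">" ourselves, in a way appropriate to this field.
    [HttpPost]
    [ValidateInput(false)]
    public ActionResult Save(string body)
    {
        string encoded = HttpUtility.HtmlEncode(body); // "<" becomes "&lt;"
        // ... store "encoded" (or store the raw value and encode on output) ...
        return RedirectToAction("Index");
    }
}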
Note also that I haven't said anything about JavaScript. I don't validate JavaScript (unless perhaps I was actually receiving it); I ignore it. I pretend it doesn't exist, and then I won't miss a case where its validation could be tampered with. Pretend yours doesn't exist at this layer too. Ultimately, client-side validation is there to save the good guys time when they make honest mistakes, not to thwart the bad guys.
For similar reasons, this is best not tested through a browser. Use Fiddler to construct requests that hit the validation points you want to examine. This way all client-side validation is bypassed, and you're looking at the server the same way an attacker will.
Finally, remember that a page with 100% perfect validation is not necessarily secure. E.g. if your validation is perfect but your authentication is poor, then someone can send "valid" input that is just as nasty (perhaps more so) as the more classic SQL-injection or XSS code. That touches on other topics that belong to other questions; the point is that validation as discussed here is only part of the puzzle.
I have to implement a payment gateway for a website I am maintaining, and I haven't done anything like this before. Previously to implement payment processing, the site would build a transaction and send it directly to the payment processor and await a result. Since the site handled the gathering of credit card information, building of the transaction, and the requests/responses, there wasn't much I had to worry about that the previous developer hadn't already covered.
Now that I'm implementing a payment gateway, is there anything I should be checking or verifying?
The way this processor works is, I build a form that has the order id, amount, currency, etc. in hidden fields. That form is posted to the gateway, which will handle the processing, and then post a form back to our server where we can update the shopping cart and complete the order.
The only thing I can think of is a user modifying the form fields before we post them to the gateway, such as adding a $100 item and changing
<input name="amount" value="100.00" type="hidden">
to
<input name="amount" value="0.01" type="hidden">
So when I receive the post back, I have to verify that the amount paid was equal to the amount owed. Is there anything else I am missing? The implementation documentation doesn't even mention a scenario similar to the above, so I'm a little worried I'm missing other things and leaving the site open to further exploits.
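In other words, when the gateway posts back, I expect to do something like this (a rough sketch; the field and helper names are invented):
using System.Globalization;

// Compare the posted amount against our own record of the order.
var order = LoadOrder(Request.Form["order_id"]); // hypothetical lookup
decimal paid = decimal.Parse(Request.Form["amount"], CultureInfo.InvariantCulture);
if (order == null || paid != order.Total)
{
    // Amounts don't match: don't complete the order, flag it for review.
}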
I think you'd be better off creating a dedicated web service to handle this third-party conduit architecture you have going on here. You're basically playing the middle-man, and an HTML form just feels like unnecessary overhead to me; unless it's required to be done that specific way, I'd move to a web service.
That being said, treat it like any client application, don't trust whatever they give you, validate and cleanse the information as necessary before performing the operation.
I would also recommend building or integrating support for logging into your middleware system, so should a problem arise, you have some way of capturing issues and tracking them for the future: bug fixes, support calls, etc.
It's probably obvious, but make sure to validate your order numbers; a user could put anything in there they wanted. Again, validate and cleanse the data, and log the truly weird situations.
First, I have to agree with Capital G. It would be so much easier to just make a server to server connection than to try and handle form submission through the client browser.
One thing to check: after submitting to the gateway, does the client then initiate the post back to your server, or does the gateway server handle it? If the client initiates it, what prevents them from POSTing to you that the order is complete without ever having gone to the gateway? It sounds like you might need to make a webservice request to the gateway to verify the payment actually went through before accepting the client POST that claims it did.
Could you add a digest to the communication? If you had a shared secret with the gateway you could validate the integrity of information shared even if it passed through the client by including a digest both ways.
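For instance, both sides could compute something like this over the critical fields and compare the results (a sketch only; the exact field layout and key handling are between you and the gateway):
using System;
using System.Security.Cryptography;
using System.Text;

static string ComputeDigest(string orderId, string amount, byte[] sharedSecret)
{
    // HMAC over the fields that must not be tampered with in transit.
    using (var hmac = new HMACSHA256(sharedSecret))
    {
        byte[] payload = Encoding.UTF8.GetBytes(orderId + "|" + amount);
        return Convert.ToBase64String(hmac.ComputeHash(payload));
    }
}
Reject the post-back if the digest it carries doesn't match what you compute.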
Make sense?
Carl
First, I don't think you're implementing a payment gateway. It sounds like you're just using one. If this is wrong, just ignore the rest of this answer, and I'll delete it when I can :)
Using a Payment Gateway from a Simple HTTP Form
Google Checkout -- as one example -- allows you to use an "unsigned cart" like the one you describe. The other option is to post via the web service interface and do correct error checking, etc. When you submit an order with an HTML form, Google Checkout warns you, the merchant, that the "cart is unsigned" (later, in the admin screen). This means that the information in the cart -- especially prices -- is not to be trusted. The fact that the end user put in their credit card basically vouches for the fact that the transaction is okay with him/her. So you just have to check that the numbers used to arrive at the final totals -- or amount owed, or whatever your business is -- check out. What you're doing is fine at a low level.
The reason you should use a web-service submit to the service -- and secure signing of the cart, etc. -- is: what do you do if the numbers are wrong? Call the end user up and explain the situation? That's a bit tricky, because you cannot assume fraud. There are many strange reasons why the cart might be altered without the user actually wanting to scam you.
The company I work for wants to use a "hosted payment form" to charge our customers. A question came up on how we can populate the payment form automatically with information from one of our other systems. We have no control over the hosted payment form, and we have to use IE. Is this possible at all? And if so, how can this be done?
If something is unclear, please let me know...
Assuming that you are essentially embedding the contents of a remote form in a frame/iframe, you should be able to use some JavaScript to set values for the fields: field.value = "xxxx".
That solution of course depends on the form remaining the same - any changes to the remote form will require you to update your script.
If you are "handing off" to a remote site (a redirect) that posts back to your site when payment is complete, then unless the remote site offers an API or a way of passing request parameters through, you are going to be out of luck.
Unless your payment gateway allows you to pass through data in a set API (which lots do!), you'd need to take control (and responsibility) for your payment form.
I say responsibility because you would have to prove to your merchant account provider that everything is secure. This will probably incur some security testing fees too.
So check with your merchant gateway first. Lots of systems have the means to accept data from your site and their tech support will be able to give you a straight answer immediately. Otherwise you'd have to switch it over so you process all the data yourself which, just for making things easier, isn't worth it IMO.