I'm trying to come to grips with the search contract for Modern UI applications. In my particular case the items to search for come from a web-based service, so I'm hesitant to pull them all over the web and then let the user search the results for potentially a single match. My question is, how should I go about that? Preferably, I'd just hook into the QuerySubmitted event and hit the web service from there, using the String from e.QueryText. Is that considered good practice?
If all you want is a simple server side search, then yes, hooking the QuerySubmitted event and passing the query text to your service is fine. This assumes, of course, that your service supports that kind of lookup (i.e., it has a GetProductsByText rather than just a GetAllProducts).
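For example, here is a minimal sketch of that approach, assuming a SearchPane-based Windows Store app and a hypothetical /api/products endpoint on your service; only the query text goes over the wire:

    // Sketch only: the endpoint URL and query parameter are illustrative.
    using System;
    using System.Net.Http;
    using Windows.ApplicationModel.Search;

    public class ProductSearchHandler
    {
        public void Register()
        {
            SearchPane.GetForCurrentView().QuerySubmitted += OnQuerySubmitted;
        }

        private async void OnQuerySubmitted(SearchPane sender, SearchPaneQuerySubmittedEventArgs args)
        {
            using (var client = new HttpClient())
            {
                // GetProductsByText-style lookup on the server side.
                var json = await client.GetStringAsync(new Uri(
                    "https://example.com/api/products?query=" + Uri.EscapeDataString(args.QueryText)));
                // TODO: deserialize 'json' and bind the results to your results page.
            }
        }
    }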
Things get trickier if you want to use autocomplete and provide recommendations/suggestions to the user while they are typing by handling the SuggestionsRequested event. In that case, start by looking at the Search Contract Sample for an example of how to handle that (in addition to being a good resource for understanding how to work with the Search contract in general).
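If you do go the SuggestionsRequested route, a rough sketch alongside the handler above (same assumptions and usings, plus System.Linq; the suggestions endpoint is hypothetical) looks something like this. The deferral keeps the request open while the async call completes:

    private async void OnSuggestionsRequested(SearchPane sender, SearchPaneSuggestionsRequestedEventArgs args)
    {
        var deferral = args.Request.GetDeferral();
        try
        {
            using (var client = new HttpClient())
            {
                // Assumed to return one suggestion per line for the typed text.
                var text = await client.GetStringAsync(new Uri(
                    "https://example.com/api/suggest?query=" + Uri.EscapeDataString(args.QueryText)));

                // The search pane displays at most five query suggestions.
                foreach (var suggestion in text.Split('\n').Take(5))
                    args.Request.SearchSuggestionCollection.AppendQuerySuggestion(suggestion);
            }
        }
        finally
        {
            deferral.Complete();
        }
    }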
You can also have a look at https://www.simple-talk.com/content/print.aspx?article=1716 to get a better picture.
I am developing a custom Steam bot from scratch that will react to numerous callbacks emitted by Steam, like OnConnected, OnTradeOfferReceived, etc. The callbacks contain parameters like IDs or data.
I wish to give the user the freedom to define how the system should react when a specified callback is received.
This could easily be solved by forcing the user to manually program the "reacting" parts, but I really wish to avoid that, because a large part of the potential user base are not programmers in the slightest.
The already existing SteamBot on GitHub does this, leading to questions like "how to build SteamBot.sln".
I thought of a GUI for specifying conditions and executing actions when the conditions are true, but I can't come up with a way to interpret them in code without hard-coding each and every option.
By actions, I mean replying to a trade offer, sending a chat message to someone, adding an item to a live trade etc.
Maybe the GUI should generate the actual code (based on the user's input) and recompile the bot? Any help or suggestions would be appreciated.
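To make the condition/action idea concrete, here is one hedged sketch of a data-driven approach (all names are hypothetical): rules map a callback name plus a simple condition to a predefined action, so a GUI could edit the rules without any code generation or recompilation.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Rule
    {
        public string Trigger { get; set; }            // e.g. "OnTradeOfferReceived"
        public string ConditionField { get; set; }     // e.g. "PartnerId"; null means "always"
        public string ConditionValue { get; set; }     // e.g. "76561198000000000"
        public string ActionName { get; set; }         // e.g. "AcceptTradeOffer"
    }

    public class RuleEngine
    {
        private readonly List<Rule> _rules;
        private readonly Dictionary<string, Action<IDictionary<string, string>>> _actions;

        public RuleEngine(List<Rule> rules, Dictionary<string, Action<IDictionary<string, string>>> actions)
        {
            _rules = rules;
            _actions = actions;
        }

        // Call this from each Steam callback, passing the callback's data as key/value pairs.
        public void Handle(string callbackName, IDictionary<string, string> data)
        {
            foreach (var rule in _rules.Where(r => r.Trigger == callbackName))
            {
                string actual;
                bool conditionMet = rule.ConditionField == null ||
                    (data.TryGetValue(rule.ConditionField, out actual) && actual == rule.ConditionValue);

                Action<IDictionary<string, string>> action;
                if (conditionMet && rule.ActionName != null && _actions.TryGetValue(rule.ActionName, out action))
                    action(data);
            }
        }
    }

The rules themselves could be persisted as JSON or XML edited through the GUI, so the bot only needs one fixed set of action implementations (reply to trade offer, send chat message, add item to trade, and so on).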
While going through MVC concepts, I have read that it is not good practice to have code inside a 'GET' action which changes the state of server objects (DB updates, etc.).
'Caching of return data' has been given as a reason for this.
Could someone please explain this?
Thanks in advance!
This comes from the HTTP standard. The GET verb is one that should be idempotent and safe.
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
Browsers can cache GET requests, generally for static data like images or scripts. But you can also allow browsers to cache GET requests to controller actions, using [OutputCache] or similar mechanisms. So if caching is turned on for a GET controller action, it's possible that clicking a link leading to /Home/Index doesn't actually run the Index method on the server, but instead lets the browser serve up the page from its own cache.
With this line of thinking, you can safely turn on caching on GET actions in which the data you're serving up doesn't change (or doesn't change often), with the knowledge that your server action won't fire every time.
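As a rough illustration of that, a GET action decorated with [OutputCache] can be served from cache for the duration you choose, so nothing inside it should change server state (the duration and controller name here are just examples):

    using System.Web.Mvc;

    public class HomeController : Controller
    {
        [OutputCache(Duration = 60, VaryByParam = "none")]
        public ActionResult Index()
        {
            // Read-only work only; on a cache hit this method is never invoked.
            return View();
        }
    }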
POSTs won't be cached by the browser, so any POST is guaranteed to make it to the server.
Ignore caching for a moment. Another way of thinking about this is that search engines will store HTTP GET links during their indexing/crawling process, so those links will show up in search results.
Suppose your /Home/Index is implemented as a GET but it, let's say, deletes a row in your database. Every time this link shows up in a search engine and somebody clicks it, a row gets deleted, and soon you have a lot of deleted rows.
The HTTP spec states that GET and HEAD are expected to be idempotent, i.e. they should not change server state.
One practical aspect of this is that search robots will issue a GET against any link to your site they know of. If such a GET changes user data it was not meant to change, you are in trouble.
Being idempotent has the added benefit that clients can cache the result of a GET (use HTTP headers to control this).
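For what it's worth, in ASP.NET MVC those cache headers can also be set from the action itself; a small, hedged illustration (LoadReport and the controller name are placeholders for your own read-only query):

    using System;
    using System.Web;
    using System.Web.Mvc;

    public class ReportsController : Controller
    {
        public ActionResult Details(int id)
        {
            // Emit Cache-Control: public, max-age=600 so clients/proxies may reuse the response.
            Response.Cache.SetCacheability(HttpCacheability.Public);
            Response.Cache.SetMaxAge(TimeSpan.FromMinutes(10));
            return View(LoadReport(id));
        }

        private object LoadReport(int id)
        {
            return null; // placeholder for a side-effect-free database read
        }
    }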
In my application we have multi-lingual language strings which are stored in custom tables, as the user can edit, delete, import new languages, etc. via a UI.
Currently, at the beginning of each request, I go off and get all the language strings (from our database) for the currently selected language and stick them in a dictionary.
I then have an HTML helper extension method which I use in the Razor views (see below), which looks up the correct string in the dictionary I built at the beginning of the request, based on the key supplied to the helper.
Html.LanguageString("MyLanguage.KeyHere")
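(For context, a hypothetical shape of that helper, assuming the per-request dictionary is stashed in HttpContext.Items; the key names are illustrative only.)

    using System.Collections.Generic;
    using System.Web;
    using System.Web.Mvc;

    public static class LanguageHelpers
    {
        public static MvcHtmlString LanguageString(this HtmlHelper html, string key)
        {
            var strings = HttpContext.Current.Items["LanguageStrings"] as IDictionary<string, string>;
            string value;
            if (strings != null && strings.TryGetValue(key, out value))
                return MvcHtmlString.Create(value);
            return MvcHtmlString.Create(key); // fall back to showing the key itself
        }
    }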
Now this works fine. However, as the application is getting bigger, we are getting more and more language strings. It's not an issue right now, as it's still very fast since there are only around 200 strings to get.
But this also means I'm getting all of them, even if a page has, say, one on it. Ideally I'd like a way of processing the LanguageString("") calls beforehand and doing a query at the beginning of the request to get just those that are needed. Or maybe my own LINQ-based language that can be processed to produce a more efficient call.
I'm looking for some advice on how to do this, as I'd like the application to be as efficient as possible. Any advice, help, or tips are greatly appreciated. Thanks.
I'd suggest caching language strings at the application level rather than fetching them for every request. For example, this can be done by maintaining a static dictionary and invalidating the cache only when the user makes changes to these strings. This will make your application more responsive as well as save you from implementing the (imho) rather more complex and not necessarily more efficient technique of loading this data on demand.
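A minimal sketch of what that could look like, assuming a hypothetical ILanguageRepository that loads all strings for a culture from the database:

    using System.Collections.Concurrent;
    using System.Collections.Generic;

    public interface ILanguageRepository
    {
        IDictionary<string, string> GetAllStrings(string culture);
    }

    public static class LanguageCache
    {
        private static readonly ConcurrentDictionary<string, IDictionary<string, string>> Cache =
            new ConcurrentDictionary<string, IDictionary<string, string>>();

        public static string Get(string culture, string key, ILanguageRepository repository)
        {
            // Load the culture's strings once per application lifetime, not once per request.
            var strings = Cache.GetOrAdd(culture, c => repository.GetAllStrings(c));
            string value;
            return strings.TryGetValue(key, out value) ? value : key; // fall back to the key
        }

        // Call this from the admin screens whenever strings are edited, deleted or imported.
        public static void Invalidate(string culture)
        {
            IDictionary<string, string> removed;
            Cache.TryRemove(culture, out removed);
        }
    }

Your existing Html.LanguageString helper could then delegate to LanguageCache.Get instead of rebuilding a dictionary on every request.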
As a side note I'd add the following: it's usually a good practice to address these kinds of problems when they arise (rather than fixing something that is not broken) and focus on more important things. I totally agree that performance implications of a given solution must always be taken into consideration, I'm just saying that premature optimizations are not always a good idea.
I'm trying to come up with a good way to have a WCF web service construct an equation that can then be passed back to a client, which would then evaluate the equation by plugging the numbers into the appropriate locations.
We have a form that has numerous types of fields, all of which are ultimately used to calculate section subtotals and then a final total. We're trying to stick to SOA practices, so we don't want the JavaScript or UI C# to make any decisions about how calculations should be done, but we want to avoid having to hit the web service every single time a field on the form changes. So we're trying to come up with a way to have the calculation equation still dictated by the web service, but then provided back to the client in such a way that it can be evaluated with the appropriate properties plugged into it. We could even use JavaScript's eval() for this.
The other complication is that we may have third parties that choose to have all calculations handled by the web service, so they would just pass a request object and let it determine the amounts every time. Our thought for this is to have a second operation contract that would use the first to determine the appropriate equation and then simply evaluate it, similar to how our client UI code would. I was thinking of having the equation be some kind of array which would include arithmetic enumerations between property names, which could then be serialized if we want, but the whole thing is still very much a work in progress, and I wanted to put all this out there early on to see if I could get some opinions.
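For concreteness, here is one hedged sketch of what that serialized equation might look like on the wire (all names are hypothetical); the service would build the token list, and both the JavaScript client and the second operation contract could walk it:

    using System.Collections.Generic;
    using System.Runtime.Serialization;

    [DataContract]
    public enum EquationOperator
    {
        [EnumMember] Add,
        [EnumMember] Subtract,
        [EnumMember] Multiply,
        [EnumMember] Divide
    }

    [DataContract]
    public class EquationToken
    {
        // Either the name of a form field to substitute at evaluation time...
        [DataMember] public string FieldName { get; set; }

        // ...or an operator that combines the surrounding operands.
        [DataMember] public EquationOperator? Operator { get; set; }
    }

    [DataContract]
    public class SectionEquation
    {
        [DataMember] public string SectionName { get; set; }
        [DataMember] public List<EquationToken> Tokens { get; set; }
    }

If the token list is kept in postfix (reverse Polish) order, both the JavaScript on the form and the server-side evaluation operation can process it with a simple stack, which also avoids needing eval() on the client.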
Thanks in advance for any advice/suggestions/criticism you may be able to provide.
I apologize in advance for the generic nature of my question, but I was unable to find any helpful advice from people trying to do the same thing as me on the web. Let me describe my scenario:
I am providing end users/designers of a website the ability to customize their views by storing the views (using Razor) in the database. I have all of this working, but my question is the following: from a security standpoint, how can I ensure and enforce that unwanted code doesn't get executed in the user-defined view? There are two basic approaches that I think will work conceptually, but I am not sure which one is more possible or feasible.
Option 1: Create a validation method in the administration tool that allows the user to input the view code. This would need to either take a whitelist or blacklist approach to what is allowable or not.
Option 2: Prevent unwanted code from being able to execute when rendering of the view occurs.
As a quick example of something that would need to be blocked, we wouldn't want to allow access to read or write files, access any data access functions, or even access configuration settings, etc. in the web.config. There will likely be a decently-sized list of things that probably shouldn't be allowable, but I'll need to sit down and try to think of as many security-related concerns as possible.
My question then is, which method would be the best bet? Also, can any direction be provided on how to go about either? I thought I might be able to make a trust-level-based change, which would be Option 2, but couldn't find any way to make that work on a per-view basis (the administration code is allowed to execute whatever it wants). I'm thinking Option 1 will end up being the best bet, and I'll have to check the input for certain framework functions that shouldn't be allowed. Does anyone have any experience doing anything like what I'm trying to do? ANY feedback is much appreciated!
This would be extremely difficult.
You could run the template through the Razor preprocessor, then use Roslyn (still in early beta) to parse the generated file, look through all method calls (or constructors), and return an error if it calls something you don't like.
I strongly recommend that you use a whitelist for that, since the .NET framework is big enough that you are bound to overlook something in a blacklist.
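A rough sketch of that whitelist idea using the current Roslyn packages (Microsoft.CodeAnalysis.CSharp); here generatedViewCode is assumed to be the C# produced from the Razor template, and the allowed-type list is illustrative only:

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    public static class ViewValidator
    {
        private static readonly HashSet<string> AllowedTypes = new HashSet<string>
        {
            "string", "System.DateTime", "System.Text.StringBuilder"
        };

        public static IEnumerable<string> FindDisallowedCalls(string generatedViewCode)
        {
            var tree = CSharpSyntaxTree.ParseText(generatedViewCode);
            var compilation = CSharpCompilation.Create(
                "ViewCheck",
                new[] { tree },
                new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });
            var model = compilation.GetSemanticModel(tree);

            foreach (var node in tree.GetRoot().DescendantNodes()
                         .Where(n => n is InvocationExpressionSyntax || n is ObjectCreationExpressionSyntax))
            {
                var symbol = model.GetSymbolInfo(node).Symbol;
                var typeName = (symbol == null || symbol.ContainingType == null)
                    ? null
                    : symbol.ContainingType.ToDisplayString();

                // Whitelist: reject anything that cannot be resolved or is not explicitly allowed.
                if (typeName == null || !AllowedTypes.Contains(typeName))
                    yield return node.ToString();
            }
        }
    }

Note that with only mscorlib referenced, calls into other assemblies won't resolve and are therefore rejected, which is the safe default for a whitelist; you would add references and allowed types as you deliberately open things up.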
However, I would recommend that you not use Razor at all and instead use a templating engine that does not allow real C# code.