Let's say I have a web application that is an auction site. It employs ASP.NET Web API to query the server for business data, but SignalR is also used for certain real-time aspects of the site. For example, if User A creates an auction and User B puts a bid on it, User A gets notified in real time that someone has bid on his auction. If User C then bids on the same item, User B is notified that he has been outbid.
Given this scenario, I'm wondering whether it'd be "correct use of the toolset" to simply use a SignalR hub to call into the service layer, updating the appropriate objects in the database as well as pushing the notifications to the appropriate clients. Or should I use a Web API controller to make the service layer call (which performs any DB manipulation), get a hub instance through the OWIN context, and use SignalR only for broadcasting, roughly as sketched below?
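A minimal sketch of the second option, assuming SignalR 2 and Web API 2 with attribute routing enabled; the controller, hub, service, and client method names are all hypothetical, and Clients.User assumes the default IUserIdProvider mapping:

    using Microsoft.AspNet.SignalR;
    using System.Web.Http;

    public class AuctionHub : Hub { }

    // Hypothetical service-layer abstraction; the real one does the DB work.
    public interface IAuctionService
    {
        BidResult PlaceBid(int auctionId, decimal amount);
    }

    public class BidResult
    {
        public string SellerId { get; set; }
        public string OutbidUserId { get; set; } // null if nobody was outbid
    }

    public class BidsController : ApiController
    {
        private readonly IAuctionService _auctionService;

        public BidsController(IAuctionService auctionService)
        {
            _auctionService = auctionService;
        }

        [HttpPost]
        [Route("api/auctions/{auctionId}/bids")]
        public IHttpActionResult PlaceBid(int auctionId, [FromBody] decimal amount)
        {
            // All persistence goes through the service layer as usual.
            var result = _auctionService.PlaceBid(auctionId, amount);

            // SignalR is used only to broadcast the outcome. GlobalHost works
            // with the default setup; with a custom dependency resolver you
            // would pull the hub context out of the OWIN environment instead.
            var hub = GlobalHost.ConnectionManager.GetHubContext<AuctionHub>();
            hub.Clients.User(result.SellerId).bidPlaced(auctionId, amount);
            if (result.OutbidUserId != null)
                hub.Clients.User(result.OutbidUserId).outbid(auctionId, amount);

            return Ok();
        }
    }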
What are the advantages/disadvantages of each approach? I suspect the second option is the correct way, but I don't have anything concrete to support that; it just feels more correct.
I'd also point out that Web API isn't there for no reason: an auction application is exactly the kind of product for which dedicated mobile/tablet applications make sense, and exposing data independently of view markup is precisely what Web API is made for. The same applies to sending data to the server: representing actions as HTTP verbs and URLs is a convention I like to follow, so I suspect routing such actions through SignalR would undermine the point of using Web API in the first place.
Context
I'm looking for a way to tell my users that one of the dependencies of my Asp.Net Core site is down and that therefore the site is currently not available. Preferably using an error page.
Asp.Net Core provides the Health Checks functionality to manage the logic of dependency health checking, but it only provides an endpoint meant for load-balancers.
There is also a kind of dashboard functionality available for the health checks, but that is not meant as an error page for end users; it is aimed at administrators.
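For reference, the registration for that load-balancer endpoint is small (a sketch; ExternalServiceHealthCheck is a placeholder for whatever IHealthCheck implementation probes the dependency):

    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Extensions.Diagnostics.HealthChecks;

    // Placeholder check for the external dependency discussed below.
    public class ExternalServiceHealthCheck : IHealthCheck
    {
        public Task<HealthCheckResult> CheckHealthAsync(
            HealthCheckContext context, CancellationToken cancellationToken = default)
        {
            // A real implementation would ping the external service here.
            return Task.FromResult(HealthCheckResult.Healthy());
        }
    }

    // Program.cs (minimal hosting model):
    // var builder = WebApplication.CreateBuilder(args);
    // builder.Services.AddHealthChecks()
    //     .AddCheck<ExternalServiceHealthCheck>("external-service");
    // var app = builder.Build();
    // app.MapHealthChecks("/healthz"); // the endpoint a load balancer probes
    // app.Run();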
Why am I looking into this functionality?
I am using Azure Front Door. This product works as a load balancer. It can look at the health status endpoint provided by Asp.Net Core health checks and will take unhealthy backend nodes out of rotation.
However, it does not offer custom error pages and in the case that all backend nodes are down, it will assume that all nodes are healthy. One of the dependencies of my site is an external service that, if it is down, will be down for all instances of my site. It contains e.g. the user accounts that are needed for users to interact with my site. Therefore, I believe I need to implement an error page in the Asp.Net Core site that will show an error page when that external dependency is down.
Suggested solution
One of my ideas would be to have middleware that, when the site is degraded, always returns 503 Service Unavailable or throws an exception. The Asp.Net Core status code pages functionality could then turn that into an appropriate error page.
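A minimal sketch of that middleware; IHealthStatusCache is a hypothetical service holding the last known health status (see Question 2 below), and the error route is a placeholder:

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;

    // Hypothetical singleton holding the last known health status.
    public interface IHealthStatusCache
    {
        bool IsHealthy { get; }
    }

    public class MaintenancePageMiddleware
    {
        private readonly RequestDelegate _next;
        private readonly IHealthStatusCache _status;

        public MaintenancePageMiddleware(RequestDelegate next, IHealthStatusCache status)
        {
            _next = next;
            _status = status;
        }

        public async Task InvokeAsync(HttpContext context)
        {
            // Let the error page itself through; otherwise the re-executed
            // request would also be short-circuited and nothing would render.
            if (!_status.IsHealthy && !context.Request.Path.StartsWithSegments("/error"))
            {
                context.Response.StatusCode = StatusCodes.Status503ServiceUnavailable;
                return;
            }

            await _next(context);
        }
    }

    // In Program.cs, before MVC/endpoints:
    // app.UseStatusCodePagesWithReExecute("/error/{0}");
    // app.UseMiddleware<MaintenancePageMiddleware>();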
Question 1
Is this the best architecture? How have other people done this?
Question 2
What is the most practical way to access the current health status?
Technically it is possible to call the HealthCheckService.CheckHealthAsync(...) method directly, but awaiting that method takes some time (especially if one of the dependency services does not respond), so it is not a good idea to await it inside the request pipeline.
I could use a Health Check Publisher to cache the health status by publishing it to some custom HealthStatusCache service, but it feels a bit like a workaround. Is this how other people would do it?
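For what it's worth, that publisher variant is only a few lines (a sketch; HealthStatusCache is the same hypothetical service the middleware above reads from):

    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Extensions.Diagnostics.HealthChecks;

    // Writable implementation of the cache the middleware reads.
    public class HealthStatusCache : IHealthStatusCache
    {
        public bool IsHealthy { get; private set; } = true;

        public void Update(HealthReport report) =>
            IsHealthy = report.Status != HealthStatus.Unhealthy;
    }

    // Runs in the background on the interval set via HealthCheckPublisherOptions,
    // so requests never have to await the (potentially slow) checks themselves.
    public class CachingHealthCheckPublisher : IHealthCheckPublisher
    {
        private readonly HealthStatusCache _cache;

        public CachingHealthCheckPublisher(HealthStatusCache cache) => _cache = cache;

        public Task PublishAsync(HealthReport report, CancellationToken cancellationToken)
        {
            _cache.Update(report);
            return Task.CompletedTask;
        }
    }

    // Registration:
    // builder.Services.AddSingleton<HealthStatusCache>();
    // builder.Services.AddSingleton<IHealthStatusCache>(s => s.GetRequiredService<HealthStatusCache>());
    // builder.Services.AddSingleton<IHealthCheckPublisher, CachingHealthCheckPublisher>();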
I have a C# Azure Web API backend from which data is retrieved by a front-end Ionic mobile app (which is basically an Angular app).
The authorization of users is done via Ionic's cloud service, so they handle the heavy lifting of registering users via FB, Twitter, basic (username/password).
My question is: when services are called on my backend API, how can I make sure someone doesn't just read a hardcoded username/password out of the app's internal JavaScript code and use it to access the backend data?
I know it's pretty far-fetched, but is there any way for the API to know the request is actually coming from the app (Android and iOS) and not just from someone unauthorized trying to insert data and comments from a web browser?
Since you're calling the API from JavaScript that is available for end users, you can assume that your JavaScript and all the logic/credentials contained within are accessible to all.
There are fairly secure ways around this, and FB/Twitter and their ilk have implemented them using OAuth. Essentially, on passing credentials to the API, a token is generated, which is then used for subsequent calls to the API instead of the credentials.
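In Web API terms, that flow looks roughly like this (a sketch only; the in-memory token store is a placeholder, and a real implementation would use an OAuth library or bearer-token middleware):

    using System;
    using System.Collections.Concurrent;
    using System.Web.Http;

    public class AuthController : ApiController
    {
        // Placeholder store; real systems use signed tokens (e.g. JWT) or a database.
        private static readonly ConcurrentDictionary<string, string> Tokens =
            new ConcurrentDictionary<string, string>();

        [HttpPost]
        public IHttpActionResult Login(string username, string password)
        {
            // Validate the credentials against your user store here (omitted).
            var token = Guid.NewGuid().ToString("N");
            Tokens[token] = username;

            // The client sends this token (not the credentials) on every later
            // call, typically in an Authorization header.
            return Ok(new { access_token = token });
        }
    }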
You can avoid people randomly firing off 'unauthorized' requests by using nonces, which are generated when you render the form and can be used only once to submit the form in question. You can then time-limit the validity of the nonce on the API end. Unfortunately, it's not foolproof, but it will limit the damage of any 'brute-force' attack that you might get.
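A nonce check can be as small as this (a sketch using System.Runtime.Caching for the single-use, time-limited property):

    using System;
    using System.Runtime.Caching;

    public static class Nonces
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        // Issued when the form is rendered; embed it as a hidden field.
        public static string Issue(TimeSpan validity)
        {
            var nonce = Guid.NewGuid().ToString("N");
            Cache.Add(nonce, true, DateTimeOffset.UtcNow.Add(validity));
            return nonce;
        }

        // Called when the form is submitted. Remove returns null if the nonce
        // was never issued, already used, or expired, so each works exactly once.
        public static bool Consume(string nonce)
        {
            return nonce != null && Cache.Remove(nonce) != null;
        }
    }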
Again, with any shared 'secret' (that would guarantee the origin of requests), you have to assume that anyone with enough willpower will be able to extract it from your apps, so no method you implement here will be 100% foolproof. Probably the best you can do is have a shared secret generated for each user on each device.
Short answer: you can't.
Long answer: you can (and must) validate the behaviour of a client but not the client itself.
For example, look at Pokemon Go: within a few hours there were bots able to play. After a couple of weeks Niantic started hiring machine learning software engineers and encrypted its API with the unknown6 algorithm to stop the bots, but after a few days of hard work the bots came back online.
You can use every security method in this universe (at great expense), but if someone with good software engineering knowledge wants to emulate your client, in the end they will reach their objective.
I am creating an ASP.NET MVC website that uses a 3rd party API (web service) as a data source. It is read-only, and to date has been accessed by individuals using desktop applications (most in C#). I would like to consume this API using a web site in order to centralize information and give users historical information, automate certain repetitive tasks, and more easily allow sharing of information among users.
The desktop clients today experience throttling, and if you make repeated requests to the API using a client your IP will be throttled and/or banned. I think that if I made the requests to the API from my website, its IP would be banned the moment it saw any significant use.
Let's assume that I cannot work something out with the API owners. Probably the easiest way to work around this problem is to do all of the API access using AJAX: when the user visits the website, the browser makes the requests to the API using AJAX, then turns around and posts the results to my website. I don't like this idea for multiple reasons: first, it'll be slow, and second, I could not guarantee that the data sent to my website was genuine. A malicious user could, for whatever reason, send me bad information.
So I thought a better idea would be to establish a man-in-the-middle. The user would still be forced to make the AJAX request, but they would make it to a proxy or something else that I control, which would then forward it on to the real API and intercept the response so I could be a little more certain that the data I retrieved was genuine.
Is it possible to create such a "proxy"? What would it entail? I would like to do it using a .NET technology but I'm open to any and all ideas.
EDIT: It seems I caused confusion by using the word "proxy." I don't want a proxy; what I want is a pass-through that allows me to intercept the response from the API. I could have the client make the request and then upload the result, but I don't want to trust the client, I want to trust the API.
Let me explain this in shorter form. There is a client on a user's machine which can make a request to an API to get current information. I would like to create a website that does the same thing, but I am considering the possibility that the API web service may notice that, while previously it was receiving ten requests for ten users from ten different IPs, it is now receiving ten requests for ten users from one IP, and may block that IP as a bot even though every request was kicked off by a user action just as before. The easiest way to work around this is to have the user make the request and then upload the response to me, but then I am forced to blindly accept data from a client, which is a huge no-no for any website in any situation. If instead I could place something in between that forwards the request along to the API preserving the IP of the user, but is also capable of intercepting the response, thereby proving that the data is authoritative, that would be preferred. However, I can't think of a software mechanism to do this; it seems like it would need to be done at a different layer.
As for legal concerns, this is a widely used API with many applications and users (and there are other websites I have found using the API), but I was unable to find any legal information like terms of service beyond forum postings in the API's tech support section amounting to "don't make repeated requests, obey our caching instructions" etc. I can't find anything that would indicate this is an illegal or incorrect use of the web service.
You could implement your own proxy. It wouldn't need to be AJAX, though; it could just be a normal web page request that displays the API results if you wanted.
Either way, in .Net you could do it using ASP.Net MVC. If you want AJAX, use a Web API controller action that wraps the source API; if you want a web page, just use a regular MVC controller/action.
Inside your controller, you would just make a web request to the source, passing through the parameters.
In order to avoid throttling, you could cache the results of each request you make from your server (using the normal ASP.Net cache), so that if another client attempted to make the same request, or perhaps a similar one, you could return the cached results instead of making another request to the API.
You would have to determine how long the results should be cached for, depending on how up to date the data needs to be in your client. For weather data, for example, caching for an hour would seem OK; for more fast-moving data it would have to be less. You have to strike a balance between avoiding throttling and keeping data fresh.
You could also intelligently fetch more data than you need at each request and then filter the result set that you return to your client. This could give you a better cache hit rate.
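Putting those pieces together, a minimal pass-through with caching might look like this (a sketch; the upstream URL, route, and cache window are all placeholders):

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Runtime.Caching;
    using System.Text;
    using System.Threading.Tasks;
    using System.Web.Http;

    public class ProxyController : ApiController
    {
        private static readonly HttpClient Client = new HttpClient();
        private static readonly MemoryCache Cache = MemoryCache.Default;

        // Placeholder; substitute the real API's base URL.
        private const string UpstreamBase = "https://api.example.com/";

        [HttpGet]
        [Route("api/proxy/{*path}")]
        public async Task<HttpResponseMessage> Get(string path)
        {
            var url = UpstreamBase + path + Request.RequestUri.Query;

            var body = Cache.Get(url) as string;
            if (body == null)
            {
                // Cache miss: hit the upstream API once, then reuse the result.
                body = await Client.GetStringAsync(url);

                // The cache window trades freshness against throttling risk.
                Cache.Add(url, body, DateTimeOffset.UtcNow.AddMinutes(10));
            }

            return new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent(body, Encoding.UTF8, "application/json")
            };
        }
    }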
I am new to MVC and Web Services.
For my project, I have to show listing data in the view layer.
The listing data I have to show comes from another region via its web service server.
That means I have to communicate with that web server, which is separate from my web application server.
Moreover, my web application has to update some of the data and send the updated data back to that web service server.
That is my project requirement.
So I searched for every possible solution and found one at stackoverflow.com. According to that, I would use the $.ajax({ url: ... }) style, which I think relies fully on the view layer.
Then I found another solution which relies fully on the controller layer; that is, all the code that talks to the web services lives in the controller layer.
As I am new to MVC, I cannot decide which one is suitable for me.
Any suggestions and suitable solutions would be really appreciated.
As with all things in development: it depends!
If you own the services, they hang off the same domain, and you're mostly focused on rendering the results of the web service call to HTML, then client-side AJAX calls work well.
If they're on a different domain (or even subdomain), or you want to do more than "just call" the service (e.g., clean up the response, add some tracking, transform it in some way) then handling the web service call via the controller is probably the way to go. You can also easily add server-side caching and logging with this option.
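In code, the controller-side option is roughly this (a sketch assuming MVC 4+ async actions; the service URL and the way the model is handed to the view are placeholders):

    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class ListingsController : Controller
    {
        private static readonly HttpClient Client = new HttpClient();

        // Placeholder endpoint for the remote region's web service.
        private const string ServiceUrl = "https://remote.example.com/api/listings";

        public async Task<ActionResult> Index()
        {
            // The server fetches the data, so this is also the place to clean
            // it up, transform it, cache it, or log the call before rendering.
            var json = await Client.GetStringAsync(ServiceUrl);
            ViewBag.ListingsJson = json;
            return View();
        }
    }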
You could use the Unobtrusive Ajax Helpers in MVC3
http://bradwilson.typepad.com/blog/2010/10/mvc3-unobtrusive-ajax.html
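For example (a Razor sketch; the controller/action names are placeholders, and jquery.unobtrusive-ajax.js must be referenced on the page):

    @* Replaces the #listings element with the HTML returned by Home/Listings. *@
    @Ajax.ActionLink("Refresh listings", "Listings", "Home",
        new AjaxOptions { UpdateTargetId = "listings", HttpMethod = "GET" })

    <div id="listings"></div>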
I am relatively new to the WCF world, so my apologies for the newbie question. I am currently designing a layer of WCF services. One of them is an authentication service, so I came up with the following authentication mechanism:
    IUserService.TryAuthenticateUser(string username, string password, out string key)
Basically, the user tries to authenticate and, if successful, receives a session key/security key/whatever key... the key is then required for every other "WCF action", e.g.
    IService.GiveMeMyFeatures(string key);
    IService.Method1(string key);
This mechanism looks extremely intuitive to me and is also very easy to implement, so what bothers me is why I can't find similar WCF examples. This unique key (which is practically a session key with WCF-side expiration and all) can then be used from the various applications according to each application's architecture: for ASP.NET it can be stored in a cookie, for WinForms/WPF/Mobile I guess it can be stored in a field on the form class, and so on...
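Spelled out as contracts, the idea would look something like this (a sketch; Feature is a placeholder type, and the server-side session store that validates and expires keys is omitted):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class Feature { /* placeholder */ }

    [ServiceContract]
    public interface IUserService
    {
        [OperationContract]
        bool TryAuthenticateUser(string username, string password, out string key);
    }

    [ServiceContract]
    public interface IService
    {
        // Every operation receives the key and checks it (including expiration)
        // against a server-side session store before doing any work.
        [OperationContract]
        Feature[] GiveMeMyFeatures(string key);

        [OperationContract]
        void Method1(string key);
    }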
So here comes question number 1: What do you think of this method?
I also read that I can use the built-in ASP.NET Authentication Services (with membership providers etc., if I understood correctly). From an architecture point of view I don't really like this method, because when authenticating from an ASP.NET page the workflow will be like this:
ASP.NET -> WCF -> ASP.NET Authentication Service -> Response
In this scenario one could also bypass the WCF layer and call the authentication service methods directly from the ASP.NET page. I know that by going through the WCF layer for every authentication request I will lose some performance, but it is important to me to have a nice, layered architecture...
And here is question number 2: what are the advantages/disadvantages of this method over the first one, and why is it so popular when, from an architecture point of view, it seems kinda wrong?
I also read that I can send user credentials with every WCF method call and use the built-in mechanism to authenticate and respond properly to the request.
Q3: What do you think of this method?
And to sum up: obviously there are many authentication methods, but which one do you think is best and most generic (considering that the WCF services will be called from ASP.NET/WPF/mobile/etc.)?
Thanks in advance :)
The reason you can't find examples is that it's not best practice: it turns something that should be stateless (web services) into something stateful, and something that will not load-balance well at all.
As web services already have standard username and password facilities, supported by almost every SOAP stack (excluding Silverlight), that's the way to go. You can use the standard .NET role-based security model to protect your methods with this approach as well.
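A sketch of that standard approach (the validator class name is a placeholder; it is wired up under serviceCredentials/userNameAuthentication in config, and message security carries the credentials):

    using System.IdentityModel.Selectors;
    using System.Security.Permissions;
    using System.ServiceModel;

    // Validates the credentials carried in the SOAP message security header.
    public class MyCredentialValidator : UserNamePasswordValidator
    {
        public override void Validate(string userName, string password)
        {
            // Look the user up in your membership store here
            // (hard-coded for the sake of the sketch).
            if (userName != "demo" || password != "secret")
                throw new FaultException("Unknown username or incorrect password.");
        }
    }

    [ServiceContract]
    public interface IService
    {
        [OperationContract]
        void Method1();
    }

    public class Service : IService
    {
        // Standard .NET role-based security: only callers in the given role
        // may invoke the operation; no hand-rolled session key is required.
        [PrincipalPermission(SecurityAction.Demand, Role = "Users")]
        public void Method1()
        {
        }
    }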