More of a design/conceptual question.
At work the decision was made to have our data access layer called through webservices. Our website would call the webservices for any and all data to and from the database. Both the website and the webservices will be on the same machine (so no trip across the wire), but the database is on a separate machine (so that trip across the wire is required regardless). This is all in-house; the website, webservices, and database are all within the same company (AFAIK, the webservices won't be reused by any other party).
To the best of my knowledge: the website will open a port to the webservices, and the webservices will in turn open another port and go across the wire to the database server to get/submit the data. The trip across the wire can't be avoided, but I'm concerned about the webservices standing in the middle.
I do agree there need to be distinct layers for the functionality (such as a business layer, data access layer, etc.), but this seems overly complex to me. I also sense there will be performance problems down the line.
Seems to me it would be better to have the DAL assemblies referenced directly within the solution, thus eliminating the first port-to-port connection.
Any thoughts (or links) both for and against this idea would be appreciated.
P.S. We're a .NET shop (migrating from VB to C# 3.5).
Edit/Update
Marked Dathan's answer as accepted. I'm still not completely sold (I'm kind of on the fence, though leaning toward it not being as bad as I feared), but he provided a well thought out answer. I appreciated all the feedback.
Both designs (app to web service to db; app to db via DAL) are pretty standard. Web services are often used when interfacing with clients to standardize the semantics of data access. The web service is usually able to represent the semantics of your data model more accurately than the underlying persistence store, and thus helps the maintainability of the system by abstracting and encapsulating IO-specific concerns.

Web services also serve the additional purpose of providing a public interface (though "public" may still mean internal to your company) to your data via a protocol that's commonly accessible across firewalls. When using a DAL to connect directly to the DB, it's possible to encapsulate the data IO concerns in a similar way, but ultimately your client has to have direct access to the database.

By restricting IO to well-defined semantics (usually CRUD+Query), you add an additional layer of security. This isn't such a big deal for you, though, since you're running a web app and all DB access is already done from trusted code. The web service does provide an increase in robustness against SQL injection, though.
All web service justifications aside, the real questions are:
How much will it be used? The website/web service/database format does impose slightly higher overhead on the web server - if the website gets hammered, you want to consider long and hard before putting another service on the same machine. Otherwise, the added small inefficiency is probably not a big deal. On the other hand, if the site is getting hammered, you probably want to scale horizontally anyway, and you should be able to scale the web service at the same time.
How much do you gain? One of the big reasons for having a web service is to provide data accessibility to client code - particularly when multiple possible application versions need to be supported. Since your web app is the only client to use the web service, this isn't a concern - it's probably actually less effort to version the app by itself.
Are you looking to expand? You say it probably won't ever be used by any client other than the single web app, but these things have a way of gaining in size. If there's any chance your web app might grow in scope or popularity, consider the web service. By designing around a web service, you're already targeting a modular, multi-host solution, so your app will probably scale with fewer growing pains.
In case you couldn't guess, I'm a web service fan. But the above are also my honest (if somewhat biased) opinions on the subject. If you do go the web service route, be sure to make it simple - keep application logic in the app and service logic in the service, and try to draw a bright line between them when extending the two. And do design your service for efficiency and configure the hosting to keep it running as smoothly as possible.
This is a questionable design, but your shop isn't the only one using it.
Since you're using .NET 3.5 and running on the same machine, you should use WCF with the netNamedPipeBinding, which uses binary transfer over named pipes and only works on the same machine. That should mitigate the performance issue somewhat.
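To make that concrete, here's a rough sketch of what the same-machine hookup could look like in code (the IDataService contract, its operation, and the pipe address are invented for the example, not taken from the question):

using System;
using System.ServiceModel;

// Hypothetical contract, just for illustration.
[ServiceContract]
public interface IDataService
{
    [OperationContract]
    string GetCustomerName(int customerId);
}

public class DataService : IDataService
{
    public string GetCustomerName(int customerId)
    {
        // The real DAL would query the remote database here.
        return "customer-" + customerId;
    }
}

class Demo
{
    static void Main()
    {
        // Host the DAL service over named pipes: binary transfer, same machine only.
        using (var host = new ServiceHost(typeof(DataService), new Uri("net.pipe://localhost/dal")))
        {
            host.AddServiceEndpoint(typeof(IDataService), new NetNamedPipeBinding(), "");
            host.Open();

            // Client side (the website) talks over the same pipe.
            var factory = new ChannelFactory<IDataService>(
                new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/dal"));
            IDataService proxy = factory.CreateChannel();
            Console.WriteLine(proxy.GetCustomerName(42));

            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }
}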
I like the idea because it gives you flexibility. We use a very similar approach because we can have more than 1 type of database storing our data (MSSQL or Oracle) depending on our customer install choices.
It also gives customers the ability to hook into our database if they choose not to use our front end web site. As a result we get an open API for little to no extra effort.
If speed is your most critical issue, then you have to reduce your layers. However, in most cases the time it takes for your web service to process the request from the database does not add a lot of time. (This assumes you build your web service layer correctly; you can easily make it slow if you don't watch it.)
Related
I am currently involved in a simple-to-medium-complexity IoT project. The main purpose of our application is gathering data from our devices and analyzing that data, as well as calculating statistics.
On the server side we run an MVC application. Up until now we used Hangfire to schedule the calculations. Hangfire is an amazing tool for scheduling emails and other simple stuff; for more advanced things it's too slow. The calculations can take up a lot of time and are processor-intensive (we are trying to optimize them, though), so we need to run them in a background task; a simple API call won't be enough.
I thought about splitting the application into multiple parts: the website, the core, and a Windows service.
The problem is, I never tried that before and I have no idea what the best practice is to achieve that kind of thing. I searched for examples and articles, but all I found were suggestions to use Hangfire and/or Quartz.NET.
Does anyone have any resources on the best practice for building an MVC application and a Windows service, and on how they could communicate (probably through a queue)? What is the best practice in such a situation?
Although there may be many different possible ways to connect a site with a Windows service, I'd probably choose one of the following two, based on your statements:
Direct communication
One way of letting your site send data to your backend Windows service would be to use WCF. The service would expose an endpoint. For simplicity's sake this could be a basicHttpBinding or a netTcpBinding. The choice should be made based on your specific requirements; if the data is small, then basicHttp may be "sufficient".
The advantage of this approach is that there's relatively little overhead needed: you'll just have to set up the Windows service (which you'll have to do anyway) and open a port for the WCF binding. The site acts as client, the service as server. There's nothing special about it just because the client is an MVC site. You can take almost any WCF tutorial as a starting point.
Note that instead of WCF you could use another technology like .NET Remoting or even sockets just as well. Personally, I often use WCF because I'm quite used to it, but this choice is pretty opinion based.
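If it helps, here's a minimal sketch of the WCF variant hosted inside the Windows service itself (the ICalculationService contract, its operation, and the port are made up for the example; adjust to your project):

using System;
using System.ServiceModel;
using System.ServiceProcess; // reference System.ServiceProcess.dll

// Hypothetical contract; the real operations depend on your calculation jobs.
[ServiceContract]
public interface ICalculationService
{
    [OperationContract]
    void EnqueueCalculation(int deviceId);
}

public class CalculationService : ICalculationService
{
    public void EnqueueCalculation(int deviceId)
    {
        // Kick off the long-running, processor-intensive work on a background thread.
    }
}

public class CalculationWindowsService : ServiceBase
{
    private ServiceHost _host;

    protected override void OnStart(string[] args)
    {
        // Expose a netTcp endpoint that the MVC site can call.
        _host = new ServiceHost(typeof(CalculationService),
                                new Uri("net.tcp://localhost:8735/calc"));
        _host.AddServiceEndpoint(typeof(ICalculationService), new NetTcpBinding(), "");
        _host.Open();
    }

    protected override void OnStop()
    {
        if (_host != null) _host.Close();
    }

    static void Main() { ServiceBase.Run(new CalculationWindowsService()); }
}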
Queued communication
If reliability and integrity are crucial for your project, then using a queue might be a good idea. Again: depending on your needs, different products may come into consideration. If you don't need much monitoring or out-of-the-box management goodies, then even a very simplistic technology like MSMQ may be sufficient.
If those points matter more to you, then maybe you should look for something else. Just recently I got in touch with Service Bus for Windows Server (SBWS). It's the Azure Service Bus's little brother, which can be used on premises on your Windows server. The nice thing about it is that it comes at no extra charge, as it's already covered by your Windows Server license.
As with the first point: MSMQ and SBWS are just two examples. There are plenty of other usable products, such as NServiceBus, ZeroMQ, and others - you name it.
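For the MSMQ option, the System.Messaging API is enough for a first try. A bare-bones sketch (the queue name and message body are made up for the example):

using System.Messaging; // add a reference to System.Messaging.dll

class MsmqSketch
{
    private const string QueuePath = @".\Private$\CalculationJobs"; // hypothetical queue

    static void Main()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath, true); // true = transactional queue

        // Producer side (the MVC app): enqueue a job identifier.
        using (var queue = new MessageQueue(QueuePath))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            queue.Send("device-42", tx); // plain string body to keep the sketch simple
            tx.Commit();
        }

        // Consumer side (the Windows service): block until a job arrives.
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            Message message = queue.Receive(); // a real service would loop on this in a worker thread
            string jobId = (string)message.Body;
        }
    }
}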
We have multiple, decoupled systems. These systems provide public APIs which are consumed by mobile apps and other client applications.
We now need these systems to internally communicate with each other during some calls (for whatever reason). I am torn between the possible solutions.
Create secure API calls
Hosting an internal ASP.NET Web API for each system so that they can expose data and functionality to each other.
Advantage: Low coupling. As long as the API call structure doesn't change, the systems can evolve on their own.
Disadvantage: Very slow. An HTTP request is deathly slow when compared to a database query. This could severely affect the speed of our public calls.
Reference repository projects between solutions
Sharing and including the Repository (data access) codebase between multiple projects.
Advantage: Very fast. Retrieving the relevant data will be extremely fast as it would simply be a database query.
Disadvantage: High coupling. A rebuild and release of system A would likely mean the same for system B, as they will share common code.
Neither of these solutions seem ideal.
Is this a job for WCF?
Very slow. An HTTP request is deathly slow when compared to a database query. This could severely affect the speed of our public calls
Well, no. For instance, the RavenDb client API is built using HTTP; all db queries in it are executed over HTTP. You should also know that most database engines communicate with the client over the network anyway (using sockets or named pipes).
What I'm saying is that the overhead that HTTP introduces is marginal.
The bright side with HTTP is that it enables you to cache queries at any time with little effort as there are dozens of ready-to-use caching solutions for HTTP. Same thing goes for load balancing when your user base grows.
Hence I would use HTTP for this.
We are using WCF for this purpose. Typically that is the way to handle the scenario you mentioned, I think, unless you have very specific requirements.
If you use a messaging broker in between, a (WCF) services solution can scale very well.
Weigh your priorities with the differences between WCF and Web API. See this
Basically, I have a new desktop application my team and I are working on that will run on Windows 7 desktops on our manufacturing floor. This program will be used fairly heavily as it gets introduced and will need to interact with our manufacturing database. I would estimate there will (eventually) be around 100 - 200 machines running this application at the same time.
We're lucky here, we get to do everything from scratch, so we define the database, any web services, the program design, and any interaction between the aforementioned.
As it is right now, our legacy applications just have direct access to a database, which is icky. We want to not do that with the new application.
So my question is, how do I do this? Vague, I know, but basically I have a lot at my disposal here, and I'm not entirely sure what the right direction to go is.
My initial thought, based on what I've perceived others doing, is to basically wall off the database by using webservices. i.e. all database interactions from the floor MUST occur through the webservices, providing a layer of security by doing much of the database logic behind closed doors. Webservice calls are then secured to individual users via Active Directory.
As I've found though, that has some implications of its own... We have to abstract the data before it reaches the application. There's still potential for malicious abuse by using webservice calls repeatedly to ruin or spam data. We've looked at Entity Framework and really like what it provides, but as best I can tell, that's going to be unavailable by the time we're at the application level in this instance.
It just seems like I can't come to a conclusion on what is "right". So, what is right?
Web services sound like the right approach. Implementing an SOA-oriented layer on top of the webservices gives you a lot of control over what happens to the data at the database server.
I don't quite share your doubts about repeated calls doing any damage. First, you can have an audit log of every single call, so that detecting possible misuse is easy. You could also implement role-based security, so that web service methods are exposed only to users in certain roles, which means that not everyone will be able to call just any method.
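To give an idea of what role-based security on a service method can look like in WCF with Windows authentication (the contract, operation, and AD group name here are invented for the example):

using System.Security.Permissions;
using System.ServiceModel;

[ServiceContract]
public interface IFloorDataService   // hypothetical contract
{
    [OperationContract]
    void SubmitMeasurement(int stationId, double value);
}

public class FloorDataService : IFloorDataService
{
    // Only members of this (made-up) AD group can call the operation;
    // everyone else is rejected before the method body ever runs.
    [PrincipalPermission(SecurityAction.Demand, Role = @"FACTORY\MeasurementWriters")]
    public void SubmitMeasurement(int stationId, double value)
    {
        // Audit the call here, then do the actual database write.
    }
}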
You could even secure your webservices with forms authentication, so that authentication is done against any data source, not only Active Directory.
And one last thing: the application itself could be published as a ClickOnce application, so that it is downloaded and executed from a web page and updates itself automatically as you publish new versions.
If you need some technical guidance, I've blogged on that years ago:
http://netpl.blogspot.com/2008/02/clickonce-webservice-and-shared-forms.html
My suggestion, since you are greenfield, is to use an API wrapper approach with ServiceStack.
Check out: http://www.servicestack.net/ServiceStack.Northwind/
Doing that you can use servicestack authentication, abstract away your db layer (because you could move to a different DB provider, change its location, provide queues for work items etc...) and in time perhaps move your whole infrastructure to an internal intranet app.
Plus, ServiceStack is incredibly fast, interoperable with almost any protocol you throw at it, and it runs on Mono, so you are not stuck with an MS backend that could be very expensive.
My two cents. :)
First of all, this question is not appropriate for StackOverflow; you might get close-votes really quickly.
Second, you may want to have a look at WCF RIA Services for this.
These will allow you to create basic CRUD operations for all your entities, and stuff like that.
I never used this myself, so I'm not sure what the potential issues might be.
Otherwise, Just do what we did:
Create generic (<T>) interfaces, services, contracts, and everything else. This will allow you to adapt your CRUD functionality in your Services, DAOs, ViewModels and such to any entity type.
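A minimal sketch of the generic-contracts idea (the names are illustrative, not prescriptive):

using System.Collections.Generic;

public interface IRepository<T> where T : class
{
    T GetById(int id);
    IEnumerable<T> GetAll();
    void Add(T entity);
    void Update(T entity);
    void Delete(int id);
}

// One implementation per persistence technology; services, ViewModels, etc.
// only ever see IRepository<T>, so they work for any entity type.
public class SqlRepository<T> : IRepository<T> where T : class
{
    public T GetById(int id) { /* run a parameterized query */ return null; }
    public IEnumerable<T> GetAll() { yield break; }
    public void Add(T entity) { /* INSERT */ }
    public void Update(T entity) { /* UPDATE */ }
    public void Delete(int id) { /* DELETE */ }
}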
For web based applications, why doesn't PHP need middleware to run - yet languages like Java, C#, etc do?
UPDATE:
Re-worded: Why doesn't PHP need a middle tier, or business layer, separating it from the database, whereas the others do?
Assuming that by the term "middleware" you mean "a middle tier", or "a business layer" the answer is that none of them need it.
For example, there is nothing to stop you in C# (or more correctly, on the .NET Framework "stack") from writing code in web pages that directly accesses the database. Indeed, lots of prototypes start out this way.
The issue here is more around good practice - it is generally considered A Bad Thing(tm) to write web pages (sticking with the same example) that directly access the database and the reasons for this are many. Testability, security, good decoupled code - all these require you to separate your code out, and having several tiers is a natural way to do this.
Why do you not see as much of this with PHP? I think Jeff's latest blog post covers this well :)
I'd go as far as to say that C# (the language), .NET (the Framework), ASP.NET (especially ASP.NET MVC) and much of the documentation and tutorials encourage you to do the right thing and not punch a hole straight from the web page through to the database.
But there isn't actually anything stopping you from doing it.
The use case that can appear is the following:
You have multiple web apps serving data that is stored in a database. Let's say the web apps are:
Front-end for your desktop web experience
Front-end for your mobile web experience
Front-end for your APIs that you expose to developers
Let's say each front-end accesses the database directly. Then, your eng team decides that the current database should be replaced. Now you have the problem of rewriting code in every front-end.
If you had a middle tier to abstract the actual data store, you'd only re-write one layer of code. Additionally, having unit tests for that middle tier would ensure that the way the middle tier exposes data remains uniform, regardless of the data storage platform.
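To put that point in code form (the store interface and class names here are invented; this is just a sketch of the shape):

// The one data-access contract every front-end depends on.
public interface IProductStore
{
    Product FindBySku(string sku);
}

public class Product
{
    public string Sku { get; set; }
    public string Name { get; set; }
}

// Replacing the database means writing one new implementation...
public class SqlProductStore : IProductStore
{
    public Product FindBySku(string sku) { /* SQL query */ return new Product { Sku = sku }; }
}

public class DocumentProductStore : IProductStore
{
    public Product FindBySku(string sku) { /* document-db query */ return new Product { Sku = sku }; }
}

// ...while the desktop, mobile, and API front-ends keep calling the same contract.
public class ApiFrontEnd
{
    private readonly IProductStore _store;
    public ApiFrontEnd(IProductStore store) { _store = store; }
    public Product Get(string sku) { return _store.FindBySku(sku); }
}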
I've been developing in MS technologies for longer than I care to remember at this stage. When .NET arrived on the scene I thought they hit the nail on the head and with each iteration and version I thought their technologies were getting stronger and stronger and looked forward to each release.
However, having had to work with WCF for the last year, I must say I found the technology very difficult to work with and understand. Initially it's quite appealing, but when you start getting into the guts of it, configuration is a nightmare: having to override behaviours for message sizes and for the number of objects contained in a message, the complexity of the security model, disposing of proxies when faulted, and finally moving back to defining interfaces in code rather than in XML.
It just does not work out of the box and I think it should. We found all of the above issues while either testing ourselves or else when our products were out on site.
I do understand the rationale behind it all, but surely they could have come up with a simpler implementation mechanism.
I suppose what I'm asking is:
Am I looking at WCF the wrong way?
What strengths does it have over the alternatives?
Under what circumstances should I choose to use WCF?
OK folks, sorry about the delay in responding; work does have a nasty habit of getting in the way sometimes :)
Some clarifications
My main pain points with WCF, I suppose, fall into the following areas:
While it does work out of the box, you're left with some major surprises under the hood. As pointed out above, basic things are restricted until they are overridden:
Size of string that can be passed can't be over 8K
Number of objects that can be passed in a single message is restricted
Proxies not automatically recovering from failures
The amount of configuration, while it's there, is a good thing, but understanding it all and knowing what to use under which circumstances can be difficult. Especially when deploying software on site with different security requirements etc. When talking about configuration, we've had to hide lots of ours in a back-end database, because security and network people on site were trying to change things in configuration files without understanding them.
Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.
I know the world moves on; I've moved on a number of times over the (ahem) 22 years I've been developing, and I am actively using WCF. So don't get me wrong, I do understand what it's for and where it's heading.
I just think there should be simpler configuration/deployment options available, easier set-up and better management for configuration (SQL config provider maybe, rather than just the web.config/app.config files).
I use WCF all the time now and I share your pain. It seems like it was grossly over-engineered, but we are going to be stuck with it for a long, long time so I'm trying to learn it.
One thing I am certain about: XML sucks. I've had nothing but problems using XML to control it and have since switched to handling everything via code.
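For anyone curious, the switch from XML to code looks roughly like this (a sketch; the contract name and address are placeholders), including lifting the 8K string quota mentioned above:

using System;
using System.ServiceModel;

// Hypothetical contract so the sample stands alone.
[ServiceContract]
public interface ISomeContract
{
    [OperationContract]
    string Echo(string text);
}

class CodeConfiguredClient
{
    static void Main()
    {
        // Everything that would otherwise live in <system.serviceModel> XML, in code.
        var binding = new BasicHttpBinding
        {
            MaxReceivedMessageSize = 4 * 1024 * 1024,  // lift the 64 KB message default
            SendTimeout = TimeSpan.FromSeconds(30)
        };
        binding.ReaderQuotas.MaxStringContentLength = 1024 * 1024; // lift the 8 K string quota
        binding.ReaderQuotas.MaxArrayLength = 1024 * 1024;

        var factory = new ChannelFactory<ISomeContract>(
            binding, new EndpointAddress("http://localhost:8080/service")); // placeholder URL
        ISomeContract proxy = factory.CreateChannel();
        Console.WriteLine(proxy.Echo("hello"));
        ((IClientChannel)proxy).Close();
        factory.Close();
    }
}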
The concerns you listed were:
Size of string that can be passed can't be over 8K
Number of objects that can be passed in a single message is restricted
Proxies not automatically recovering from failures
The amount of configuration, while it's there, is a good thing, but understanding it all and knowing what to use under which circumstances can be difficult. Especially when deploying software on site with different security requirements etc. When talking about configuration, we've had to hide lots of ours in a back-end database, because security and network people on site were trying to change things in configuration files without understanding them.
Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.
Here's my take:
(1) addressed a valid concern that customers had with ASMX. It was too wide-open, with no way to easily control it. The 8k limit is easily lifted if you know where to look. I guess you can count that as a surprise, but it's more of a one-time thing. Once you know about it, you can lift it and be done with it forever, if you choose.
(2) is also configurable.
(3) is known, but there are boilerplate ways to work around this. The StockTrader code for example, demonstrates a proven pattern. You can re-use the code in your own app. Not sure if this is fixed in WCF for .NET 4.0. I know it was an open request.
(4) The config is a beast. This is a concern for a lot of people. The problem here is that WCF is so flexible, and config of all of that flexibility is exposed through xml files. It can be overwhelming. An approach that seems to work is to take it in small bites, as you need it.
(5) I don't understand.
I vastly prefer ASP.NET MVC and Web API over WCF. If I had to summarize WCF to a developer who was just being introduced to it, I would say, "WCF is a well-meaning attempt to replace over-engineered, Java EE style RPC development." Unfortunately, many of the decisions made require you to become an expert in configuring low level, unimportant items (message sizes, timeouts, uninteresting protocol elements, etc.) while abstracting absolutely critical pieces (URL design, parameter serialization, response serialization, etc.). The difference in productivity and aggravation between teams I know using WCF vs. Web API is night and day.
To come clean a little: I have always hated the core concept of .NET Remoting. I feel that developers need a thorough understanding of the resource structure of their application and how these resources are serialized. Furthermore, the use of the "POST" verb for simple data retrieval is worrisome in a read heavy application that needs to scale.
I'll address the rest of your issues after clarification. In the meantime, I can address your question on when you should choose to use WCF: always.
WCF is the replacement for the old ASMX technologies, including WSE. It is also the replacement for .NET Remoting. It is the only technology upon which high-level communications features in .NET will be based for the foreseeable future.
For example, consider Windows Azure. It was not inevitable that the new concept of "cloud computing" would have its communications aspects covered by WCF. Yet, WCF was flexible enough to be extended to cover those cases, with very little change in code.
If you're having trouble with WCF, then you'd do well to make sure Microsoft knows about it. WCF is the present and future of web service and other service-oriented development in .NET, so they've got a very strong incentive to listen to you and resolve your pain points. Either contact them directly through Connect, or ask questions here on SO (tag with WCF, please), and a lot of people will help you.
Biggest advantage of using WCF from a programmer's point of view: it separates the definition of exposed services (operations, contracts, etc.) from the protocol-specific details, unlike ASMX, where you expose a class as a web service directly in the code using attributes. To use a real example of mine: we were able to easily switch the transport protocol between web services and named pipes, whichever better suited the deployment and performance needs, without changing a line of code.
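A rough sketch of what that looks like in practice (the contract and addresses are invented): the same contract and implementation exposed over two transports, where switching is purely a wiring change.

using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService   // hypothetical contract
{
    [OperationContract]
    int PlaceOrder(string sku);
}

public class OrderService : IOrderService
{
    public int PlaceOrder(string sku) { return 1; /* persist the order */ }
}

class Host
{
    static void Main()
    {
        var host = new ServiceHost(typeof(OrderService),
            new Uri("http://localhost:8080/orders"),
            new Uri("net.pipe://localhost/orders"));

        // Two endpoints, one implementation: changing transports never touches the service code.
        host.AddServiceEndpoint(typeof(IOrderService), new BasicHttpBinding(), "");
        host.AddServiceEndpoint(typeof(IOrderService), new NetNamedPipeBinding(), "");
        host.Open();

        Console.WriteLine("Listening over HTTP and named pipes; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}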
WCF is intended for SOA methodologies. Working with it professionally is a nightmare. I delivered an SOA solution using WCF as the tool and, hell, hundreds of configurations and hidden tips! My past distributed solutions using old-style web services and Remoting were more stable. I've spent days working out a solution for the error "The underlying connection was closed: An unexpected error occurred", which made no sense, happening as it did for only one method among four in the same contract. I'm very disappointed. It took me back in time to when .NET was first introduced with lots of promises and, once we got hands-on, hell, lots of problems came up!
To address the maintenance nightmare of application config, standards like UDDI and WS-Discovery exist; WS-Discovery will be supported by WCF in .NET 4.0.
Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.
Can you be more explicit? I think you are talking about service behavior configured in code.
You could easily code behavior extensions to configure what you're talking about in a config file instead of in code, BUT I think that if Microsoft didn't do that, there is a good reason.
For example, a service with this behavior:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.PerCall, ConcurrencyMode=ConcurrencyMode.Single)]
The implementation knows that the instance is not shared between multiple threads, so it's developed differently than:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.Single, ConcurrencyMode=ConcurrencyMode.Multiple)]
In this case the service implementation has to take care of concurrency problems itself.
The implementation is coupled to the ServiceBehavior attribute, so moving this behavior into an XML file is not a good idea.
What if you could change an InstanceContextMode.PerCall service to an InstanceContextMode.Single service in the config file? You'd break the application!
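A small sketch of why (the service and field names are invented): code written for PerCall can rely on never being shared, and that assumption silently breaks if the instancing mode changes.

using System.ServiceModel;

[ServiceContract]
public interface ICounterService   // hypothetical contract
{
    [OperationContract]
    int Process(int item);
}

// Written for per-call instancing: every request gets a fresh instance,
// so this field is never touched by two threads at once.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class CounterService : ICounterService
{
    private int _itemsProcessed; // safe only because the instance is per-call

    public int Process(int item)
    {
        _itemsProcessed++;       // flip the behavior to Single/Multiple in config and
        return _itemsProcessed;  // this unsynchronized increment becomes a race condition
    }
}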
Judging from how you mention XML and SQL, you are using WCF to build a web application or an actual web service (a service on the Web, not just a SOAP exchange).
It helps to think of WCF as a replacement for .NET Remoting (or DCOM, CORBA, etc.) which also happens to support web services as one of its transports. Interfaces declared in assemblies, the behavior of proxies, certain configuration options, and other aspects of the framework that look unnatural and complicated from the perspective of web apps actually do work out of the box for DCOM-style systems of distributed objects.
To answer the question: no, you are not missing anything, and using WCF for web applications is complicated because WCF is not a framework for building web applications. Such a framework could probably be built on top of it, but I would hate to see WCF itself changed to move into the web realm.