Let's say I need to call multiple services that insert records with EF in the same transaction, specifically for inserting a Person and a Unit.
I'm uncertain whether I should create a new operation contract named AddPersonAndUnit that uses TransactionScope inside the method and call it from the client, OR create no additional method and just use TransactionScope in the client (ASP.NET MVC) to call the already existing AddPerson and AddUnit.
For single responsibility and keeping all business logic in the service layer, defining an extra method in the service layer and calling it from the client seems the better choice, but on the other hand just calling the existing methods from the client requires less effort.
What do you think is the better design choice? Is dealing with transactions in the client "bad"? Is it worth creating a new method for this?
Based on your question, you have two projects (ASP.NET MVC & WCF) and you want to make sure Person and Unit are inserted/updated transactionally.
Taking those two points as inputs,
let's do a quick analysis of option 1 (use TransactionScope in the client code, i.e. the proxy class).
Effort needed:
Modify the service contract classes (OperationContract and ServiceBehavior) to support transactions
Decorate the existing methods (OperationBehavior) to support transactions
Modify the service binding configuration
Reconfigure the client binding configuration
PROS (CLAIMED):
Requires less effort => as the effort list above shows, in my opinion it's difficult to say this option requires less effort to implement unless everything is already in place
CONS:
Since both the application code (MVC) and the service code (WCF) are fully controlled by you, you can make sure you ALWAYS use a transaction when inserting/updating Person and Unit. However, I would prefer to make it a black-box service, so that I could expose it to a third-party client or hand the service to another developer without worrying about data inconsistency (what if they forget, or intentionally skip, the transaction?). Even though you can force the client to ALWAYS use a transaction when calling the web service (by using TransactionFlowOption.Mandatory), I think the good practice is to expose only a minimal service surface to the client (see the sketch after this list)
Single responsibility is another concern
Duplicated code (you will need to copy the same code over and over every time you want to insert a Person & Unit)
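For contrast, here is a minimal sketch (not your actual code) of the black-box alternative: a single AddPersonAndUnit operation that reuses the existing inserts on the service side and owns the transaction itself, so no client ever has to remember to create one. The contract and method names are illustrative, and TransactionScope needs a reference to System.Transactions.

[OperationContract]
void AddPersonAndUnit(Person person, Unit unit);

public void AddPersonAndUnit(Person person, Unit unit)
{
    // Requires System.Transactions.
    using (var scope = new TransactionScope())
    {
        AddPerson(person);   // existing EF insert
        AddUnit(unit);       // existing EF insert
        scope.Complete();    // both inserts commit together, or neither does
    }
}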
Hope it helps.
I am using .NET 4 to create a small client-server application for a customer. Should I create one giant service that implements many contracts (IInvoice, IPurchase, ISalesOrder, etc.) or should I create many services running one contract each on many ports? My question is specifically about the pros/cons of either choice. Also, what is the common way of answering this question?
My true dilemma is that I have no experience making this decision, and I have little enough experience with WCF that I need help understanding the technical implications of such a decision.
Don't create one large service that implements n-number of service contracts. These types of services are easy to create, but will eventually become a maintenance headache and will not scale well. Plus, you'll get all sorts of code merging conflicts if there's a development group competing for check-ins/check-outs.
Don't create too many services either. Avoid the trap of making your services too fine-grained. Try to create services based on a functionality. The methods exposed by these services shouldn't be fine-grained either. You're better off having fewer methods that do more. Avoid creating similar functions like GetUserByID(int ID), GetUserByName(string Name) by creating a GetUser(userObject user). You'll have less code, easier maintenance and better discoverability.
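As a rough sketch of what that consolidation can look like (UserQuery and IUserService are hypothetical names, not from the question), one coarse-grained operation replaces the two fine-grained lookups:

[DataContract]
public class UserQuery
{
    [DataMember] public int? Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface IUserService
{
    // One coarse-grained operation instead of GetUserByID / GetUserByName.
    [OperationContract]
    User GetUser(UserQuery query);
}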
Finally, you're probably only going to need one port no matter what you do.
UPDATE 12/2018
Funny how things have changed since I wrote this. Now with the micro-services pattern, I'm creating a lot of services with chatty APIs :)
You would typically create different services for each main entity like IInvoice, IPurchase, ISalesOrder.
Another option is to separate queries from commands. You could have a command service for each main entity that implements business operations accepting only the data they need to perform the operation (avoid CRUD-like operations), and one query service that returns data in the format the client requires. This means the command side uses the underlying domain model/business layer, while the query service operates directly on the database (bypassing the business layer, which is not needed for querying). This simplifies your querying a lot and makes it more flexible (return only what the client needs).
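A rough sketch of that split, with hypothetical contract and DTO names:

[ServiceContract]
public interface IInvoiceCommandService
{
    // A business operation that accepts only the data it needs.
    [OperationContract]
    void ApproveInvoice(int invoiceId, string approvedBy);
}

[ServiceContract]
public interface IInvoiceQueryService
{
    // Reads straight from the database and returns a client-shaped DTO.
    [OperationContract]
    InvoiceSummaryDto[] GetOpenInvoices(int customerId);
}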
In real-world applications you typically have one service contract for each entity, so Invoice, Purchase and SalesOrder each get a separate ServiceContract.
However, each service contract will have heterogeneous clients: Invoice might be called by the back office through a Windows application using netNamedPipeBinding or netTcpBinding, while at the same time a client application needs to call the service using basicHttpBinding or wsHttpBinding. So you basically need to create multiple endpoints for each service.
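A minimal sketch of hosting one contract behind multiple endpoints, assuming self-hosting with ServiceHost (the InvoiceService/IInvoiceService names and addresses are placeholders; the same can be expressed in config instead):

// Inside your host startup code (needs System.ServiceModel).
var host = new ServiceHost(typeof(InvoiceService));
// TCP endpoint for the back-office Windows clients.
host.AddServiceEndpoint(typeof(IInvoiceService), new NetTcpBinding(),
    "net.tcp://localhost:9000/invoice");
// HTTP endpoint for the other client applications.
host.AddServiceEndpoint(typeof(IInvoiceService), new BasicHttpBinding(),
    "http://localhost:9001/invoice");
host.Open();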
It seems that you are mixing up DataContract(s) and ServiceContract(s).
You can have one ServiceContract and many DataContract(s), and that would suit your needs perfectly.
The truth is that splitting up WCF services - or any services - is a balancing act. The principle is that you want to keep downward pressure on complexity while still considering performance.
The more services you create, the more configuration you will have to write. Also, you will increase the number of proxy classes you need to create and maintain on the client side.
Putting too many ServiceContracts on one service will increase the time it takes to generate and use a proxy. But, if you only end up with one or two Operations on a contract, you will have added complexity to the system with very little to gain. This is not a scientific prescription, but a good rule of thumb could be say about 10-20 OperationContracts per ServiceContract.
Class coupling is of course a consideration, but are you really dealing with separate concerns? It depends on what your system does, but most systems deal with only a few areas of concern, so splitting things up may not actually decrease class coupling that much anyway.
Another thing to remember, and this is ultra important, is to always make your methods as generic as possible. WCF deals in DataContracts for a reason: DataContracts mean that you can send any object to and from the server so long as the DataContracts are known.
So, for example, you might have 3 OperationContracts:
[OperationContract]
Person GetPerson(string id);
[OperationContract]
Dog GetDog(string id);
[OperationContract]
Cat GetCat(string id);
But, so long as these are all known types, you could merge these into one operation like:
[OperationContract]
IDatabaseRecord GetDatabaseRecord(string recordTypeName, string id);
Ultimately, this is the most important thing to consider when designing service contracts. This applies to REST as well, if you are using a DataContract-like serialization method.
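The "known types" part is what makes the merged operation work. A minimal sketch (IRecordService is a hypothetical name; the concrete types come from the example above):

[ServiceContract]
[ServiceKnownType(typeof(Person))]
[ServiceKnownType(typeof(Dog))]
[ServiceKnownType(typeof(Cat))]
public interface IRecordService
{
    // The serializer can only round-trip IDatabaseRecord because the
    // concrete types are declared as known types above.
    [OperationContract]
    IDatabaseRecord GetDatabaseRecord(string recordTypeName, string id);
}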
Lastly, go back over your ServiceContracts every few months and delete operations that are not getting used by the clients. This is another big one!
You should take the decision based on the expected load, the extensibility needed and the future perspective. As you wrote "small client server application for a customer", that doesn't give a clear idea of the intended use of the development in hand. Mr. Big's answer should be considered too.
You are most welcome to put forward further questions backed with specific data or particulars about the situation at hand. Thanks.
I've just begun learning WCF, and I'm coming from a total non-web background.
I have built a 3-tier desktop application which compiles into one exe and runs locally.
Now I want to move the whole business logic layer to a central server and make the GUI a client application.
As far as I understand, WCF should be my solution, and indeed it helped me achieve what I wanted.
I managed to run remote functions, which is the basis of what I need.
My problem now, is that I don't quite understand the architecture.
For example, one of my services returns a data type (a class) from my business logic layer.
This class automatically becomes available to the client through the WCF mechanism.
But the problem is, this class contains some methods which I definitely do not want to expose to the client.
For example a Save method (saves to the db).
Furthermore, sometimes I don't even want to allow the client to change all of the class's properties, since an instance might be sent back to one of my services.
I do not want to re-validate the class instance in the service.
What should I do? Should I build another layer, a restricted version of the business logic, which I expose to the client? Or is there any way to expose only part of my class to the client, without restricting the server itself?
I know this is a basic question, but honestly I've searched a lot before asking here. My problem is I don't quite know what to search for.
My second question is then: do you have any recommendations for resources that can explain this architecture to me...?
Typically, if you want to encapsulate your business layer, you would not want to expose the business objects directly. This is because you now have a de-coupled client and you don't necessarily want to have to update the client every time the business logic/properties change.
This is where Data Transfer Objects (DTO) come into play nicely. Usually, you want to have control over your contract (data and methods) that you expose. Therefore, you would explicitly make other objects (DTOs) that make up the transfer layer. Then, you can safely change your client and server code independently (as long as both still fulfill the contract objects).
This usually requires a little more mapping (before you send or receive on each side) but it is often worth it.
For WCF, your interfaces and classes marked with [ServiceContract] and your classes marked with [DataContract] usually make up this transfer layer.
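A minimal sketch of that transfer layer, with hypothetical names (PersonDto, IPersonService): the DTO carries only the data the client needs, and the Save logic stays server-side.

[DataContract]
public class PersonDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    // No Save() method and no persistence details here.
}

[ServiceContract]
public interface IPersonService
{
    [OperationContract]
    PersonDto GetPerson(int id);

    [OperationContract]
    void UpdatePerson(PersonDto person); // the server maps the DTO back to the business object and saves
}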
In WCF, to expose a method to the client you have to mark it with OperationContractAttribute. So if you don't want clients to use your Save method, just don't mark it with this attribute.
More info here: http://msdn.microsoft.com/en-us/library/system.servicemodel.servicecontractattribute.aspx
Pretty much the same thing applies to properties, but with a different attribute: DataMemberAttribute. If you don't want the client to see a property, just don't mark it with the DataMember attribute.
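For example, a minimal sketch (the class and property names are illustrative): only the members marked with [DataMember] cross the wire.

[DataContract]
public class Person
{
    [DataMember]
    public string Name { get; set; }           // sent to the client

    public string InternalNotes { get; set; }  // no [DataMember]: never serialized

    public void Save() { /* db code */ }       // methods are never part of the DataContract
}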
But the problem is, this class contains some methods, which i definitely do not want to expose to the client.
Are you able to provide an example of your class and interface code? If so I'm sure you might be able to get more specific answers.
For example a Save method (saves to the db).
One possible approach would be to separate your class into 2 classes. Define the properties in the first class and then use that class as the base class of your second class. Then use the second class to define the methods. This would allow you to return only the properties while allowing you to keep your code DRY.
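A quick sketch of that split (the class names are just for illustration):

[DataContract]
public class PersonData            // returned to clients: properties only
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

public class Person : PersonData   // used server-side: adds the behavior
{
    public void Save()
    {
        // persist this instance to the database
    }
}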
Further more, sometimes I don't even want to allow the client to change all the properties of the class, since this class might be sent to one of my services.
I do not want to re-validate the class instance in the service.
While you are able to define logic in the get and set methods for each property, I would highly recommend revalidating any input received between services, simply because any future changes or errors in one service could lead to larger problems across your application. This also helps make your application more secure against potential attacks.
Should I build another layer, restricted version of the Business Logics, which I expose to the client? Or is there any way expose only part of my class to the client, without restricting the server it self?
I agree with the above answers that you should be able to limit access to the different properties and methods using the data and method attributes within your interfaces.
My second question is then, do you have any recommendation for any resource that can explain me this architecture...?
If you are looking for inexpensive but highly valuable video-based training, I've found the courses that Pluralsight offers to be quite good for both architecture and WCF services (btw, I am not associated with them, I just enjoyed their training).
I have a Windows Forms application (C#) and an ASP.NET web application which both access a SQL Server database. I want to centralize the database access. Which methodologies should I follow? What is the common approach to this issue?
Writing DAL and model libraries and using them in both applications?
Writing a WCF service that includes the DAL/model and using this service with both applications?
None of the above?
Can you give me any idea?
Thank you.
I would go with the WCF approach. Keep in mind that when (not if, when) you have to make changes that pertain to one app, but not the other (yet), you will have to account for that in the common layer, so using interfaces may make your life a little easier.
The cleanest way is to wrap the DB with a WCF service.
If you don't write large amounts of data in one go you can use a WCF Data Service; this directly wraps an Entity Framework model and you can configure access to tables and methods in various ways.
What you want is to have one place where the DB is accessed, so that if there is an issue, you can fix it in one location, for instance.
Furthermore, if you want to log all calls to a particular table, for instance, the only way to make sure that is done is by centralizing all calls to the DB this way and not allowing anybody direct access to the DB.
Wrap the service, then keep the connection string secret.
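As a rough sketch of that centralization (the contract, DTO and method names are invented for illustration), every caller goes through the service, so cross-cutting concerns like logging live in exactly one place:

[ServiceContract]
public interface ICustomerDataService
{
    [OperationContract]
    CustomerDto GetCustomer(int id);

    [OperationContract]
    void SaveCustomer(CustomerDto customer);
}

public class CustomerDataService : ICustomerDataService
{
    public CustomerDto GetCustomer(int id)
    {
        Log("GetCustomer", id);   // centralized logging/auditing
        // ... load via the DAL / EF and map to the DTO
        return null;              // placeholder for the mapped result
    }

    public void SaveCustomer(CustomerDto customer)
    {
        Log("SaveCustomer", customer.Id);
        // ... validate and persist via the DAL
    }

    private static void Log(string operation, int id) { /* write to your log */ }
}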
I think using the SOA approach is really better (WCF or web services with a DAL layer) because this way you don't need to ship your DAL dll with the Windows Forms exe. Then all changes to your data model automatically reach both of your UI clients.
Remember that this can cause its own problems:
Security concerns, so that your services cannot be accessed directly by URL, allowing someone to run your methods.
Maintenance concerns, because changes in the data layer that need to affect only one interface will be more difficult to control and need to be planned in advance (with the creation of new methods specific to a certain interface).
Decreased performance, because HTTP access is always more costly than direct communication with a dll.
The risk of losing communication with the server, something ASP.NET already expects, but which requires additional care in the Windows Forms client to behave properly in those cases.
Option 1 seems simpler and I would do the same.
Option 2 with WCF will add additional code to your product, and hence maintenance. It would also mean an additional layer.
Corporate programmers like the second option (WCF service including DAL).
Struggling with this one today.
Rewriting a web-based application; I would like to do this in such a way that:
All transactions go through a web services API (something like http://api.myapplication.com) so that customers can work with their data the same way that we do / everything they can do through our provided web interface they can also do programmatically
A class library serves as a data layer (SQL + Entity Framework), for a couple of design reasons not related to this question
Problem is, if I choose not to expose the Entity Framework objects through the web service, it's a lot of work to re-create "API" versions of the Entity Framework objects and then write all the "proxy" code to copy properties back and forth.
What's the best practice here? Suck it up and create an API model class for each object, or just use the Entity Framework versions?
Any shortcuts here from those of you who have been down this road and dealt with versioning / backwards compatibility, other headaches?
Edit: After feedback, what makes more sense may be:
Data/Service Layer - DLL used by public web interface directly as well as the Web Services API
Web Services API - almost an exact replica of the Service Layer methods / objects, with API-specific objects and proxy code
I would NOT have the website post data through the web services interface for the API. That way leads to potential performance issues for your main website. Never mind that as soon as you deploy a breaking API change you have to redeploy the main website at the same time. There are reasons why you wouldn't want to be forced to do this.
Instead, your website AND web services should both communicate directly with the underlying business/data layer(s).
Next, don't expose the EF objects themselves. The web service interface should be cleaner than this. In other words, it should try to simplify the act of working with your backend as much as possible. Will this require a fair amount of effort on your part? Yes. However, it will pay dividends when you have to change the model slightly without impacting currently connected clients.
It depends on project complexity and how long you expect it to live. For small, short-lived projects you can share domain objects across all layers. But if it's a big project, and you expect it to exist, work well, and be updated for the next 5 years...
In my current project (which is big), I started with shared entities across all layers, then I discovered that I needed separate entities for presentation, and now (6 months in) I'm using separate classes for each layer (persistence, service, domain, presentation). That's not because I'm paranoid or following some rule; I just couldn't make it all work with a single set of classes across layers... Draw your own conclusions.
P.S. There are tools that can help you convert between your objects, like AutoMapper and ValueInjecter.
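For illustration, a minimal AutoMapper-style sketch of copying a persistence entity into a presentation class (the Person/PersonDto types and personEntity variable are placeholders, and this assumes a recent AutoMapper version with the MapperConfiguration API):

using AutoMapper;

var config = new MapperConfiguration(cfg => cfg.CreateMap<Person, PersonDto>());
var mapper = config.CreateMapper();

// Copies matching properties from the persistence/domain entity into the presentation class.
PersonDto dto = mapper.Map<PersonDto>(personEntity);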
I would just buck up and create an API specifically aimed at the needs of the application. It doesn't make much sense to do what amounts to exposing the whole DB layer. Just expose what needs to be exposed in order to make the app work, and nothing else.
I am having trouble deciding between two possible design choices. I have a web site which has a pretty extensive business layer and DAL (website, bll, and dal are all in multiple separate dlls). I need to design a windows service that can take some of my business objects, write them to a file, and store them locally within our network. The files are then imported into a 3rd party program which does further processing on them.
I can design this service one of two ways:
Wrap the service around the business layer and DAL. This would be quick and easy but the downside is every time the business layer changes, the service will have to be updated.
Add a web service to the web site and just query the web service for what I need. The windows service wouldn't have to use the business layer and as long as the web service doesn't change, I'll be good. The only downside is that I may have to create some basic business objects to parse the web service's xml into.
The windows service will have to poll the business layer/dal or web service every 10-20 minutes or so. The windows service is necessary because the web site is hosted offsite and thus doesn't have access to any of our local resources. I am leaning towards option 2 but I'm torn.
Given the two choices, which is the better option? Are there other possible options that I haven't considered? Also, how do you usually design for situations where you have one core set of libraries that are primarily used by a website but may end up being used either for data retrieval or to perform some function?
I'm not sure what the criteria is for storing certain business objects as files on the network, but if you're doing this on a regular basis then presumably you are trying to track changes of some kind, so there is another solution: Build the logic directly into the business/persistence layer.
If this secondary file storage is a business requirement, then it ought to be embedded directly in that tier and triggered by some sort of event. That way, instead of having what is essentially an ad-hoc post-processing job that can get out of sync with the rest of the system, you have just one coherent system.
Invert the design - instead of wrapping a web service around the business services and using it for ad-hoc reporting, create a web service that encapsulates the data you need to receive from the export on a regular basis, and have your business tier send messages to it when new data is ready. You can send messages asynchronously so as not to tie up the business services, and depending on your reliability requirements you could set up a message queue (it's easier than it sounds, WCF already knows how to use MSMQ as the delivery mechanism, it's just a few configuration settings to change).
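A small sketch of what that push-style contract could look like (the contract name and DTO are hypothetical); marking the operation one-way lets the business tier fire the message without waiting, and the same contract can sit behind netMsmqBinding for queued delivery:

[ServiceContract]
public interface IExportSink
{
    // Fire-and-forget: the business tier does not block on the export service.
    [OperationContract(IsOneWay = true)]
    void RecordReady(ExportRecordDto record);
}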
I can't say with any certainty that this is better than your first two options without knowing a good deal more about the architecture, the amount and type of data, the scheduling and reporting requirements, etc., but it is something you should consider. If you think that your business services are likely to change fairly frequently, then it might work better have it push data outward to a "warehouse" type abstraction rather than having a mining process to pull it.
Otherwise, I think I would go with option 2. I don't know if you've worked with WCF services before but you should know that you never actually have to parse XML. Everything is done through data contracts and when you generate a proxy for the web service, you get strongly-typed .NET objects. If you can pass your domain objects directly through the service API then it's really very little work at all to create the web service.
The real downside to a web service is that you have to take steps to ensure that your service contract never substantially changes (otherwise it can break clients). So you might eventually end up needing to create Data Transfer Objects on the service side to use as the public API instead of passing through domain objects. But in many cases you won't need to do this for a good long while, so go ahead and try it out, you'll see that it's pretty straightforward.
A variant of option two:
Add a WCF service to the site, exposing the information required as basic DTO DataContracts.
You could use AutoMapper or similar within the WCF service to handle the boring bit of converting your business objects to DTOs.
From your point two I understand that you would just add the web API for this extra service. Thus, you would have to update three parts for any change (extra service, web API, dll). With option one you would only have to update two parts (extra service, dll), so I would go with option one.
BUT if you are targeting a general web API which you will have to maintain anyway, go with option two.
For more flexibility, instead of hard-wrapping your service around the business layer and DAL, and instead of relying on the web site (through an integrated web service), make use of design concepts like interfaces, dynamic type loading and Inversion of Control, so that your service is a thin, decoupled layer that communicates with the business layer and DAL and allows them to be updated dynamically without recompiling the service. Maybe put the assemblies in the machine's Global Assembly Cache so they can be shared across various other projects, assemblies and apps.
I know it seems like throwing out jargon for the sake of it but that's how I would start to think.
Edit:
Loading types dynamically is actually amazing and easy. This is quick C# pseudo-code for one way to do it, and even without testing it might actually be right.
// Get a System.Type from its string representation
// (Type.GetType returns null if the type can't be resolved).
Type t = Type.GetType("type name");
// Create an instance of the type using its parameterless constructor.
object o = Activator.CreateInstance(t);
// Cast it to the interface (or actual Type) you're working with.
IMyInterface strongObject = (IMyInterface)o;
// ... and continue from there with the instance.
Instructions about how to formulate the string representation of a type name can be found in MSDN under Type.AssemblyQualifiedName, Type.GetType and similar places. In short you can see a lot of assembly qualified type names in the app.config or web.config files because they use the same format.