A suitable design pattern for similar web service implementations - C#

I'm consuming a SOAP web service that publishes a separate service endpoint and WSDL for each of its customers. I don't know why they do that. For example, if they have two clients, A and B, the provider designates two different service addresses with different WSDL addresses. These separate WSDLs describe 90% the same objects and functions, but some differ based on the type of the customer. As a result, the generated objects end up as distinct types even though they behave exactly the same way.
So in order to call the correct service, I store the name of the customer ("A" or "B") in a table, and my program has to know on every run which customer it's dealing with. I don't want a different program for each customer; I just want my program to read the customer name and, based on that, know which model and which controller functions to use.
What design pattern(s) would help me solve this?
Chances are there will be additional customers in the future, so I want my code to be as loosely coupled as it gets.
I have always wanted to use design patterns correctly in my code, so I guess it's time to do so. Should I use the Strategy pattern? Can you briefly explain what the best solution would be?

I would use two design patterns in your case. The first is the Facade pattern: use it to simplify the interface of the web services your application has to deal with, so that you only need to change the facade's implementation when the web service contract changes. Convert the objects from the service into objects under your control, and expose functions with names and parameters that fit your domain and abstraction level.
The second is the Adapter pattern. Determine whether you can settle on a common interface for both web services; in other words, whether the 10% difference between the two services can be converted into one interface (of objects and/or functions) that your application uses.
The facade would use adapters to convert the 10% difference into common objects and functions. After that, the facade uses the common objects and functions, as well as the other 90% of the web services, to supply a proper abstraction layer for your application.
If there are additional customers in the future, you'll most likely only need to add or modify an adapter.
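A minimal sketch of how the two patterns could fit together. All names here (Invoice, ICustomerServiceAdapter, BillingFacade, the adapter classes) are hypothetical, and the commented proxy calls stand in for whatever svcutil generates from each WSDL:

using System;

// A shared domain type under your control (stub for illustration).
public class Invoice
{
    public string Id { get; set; }
    public decimal Total { get; set; }
}

// The common interface both adapters implement.
public interface ICustomerServiceAdapter
{
    Invoice GetInvoice(string invoiceId);
}

// One adapter per generated proxy; each absorbs the 10% that differs.
public class CustomerAAdapter : ICustomerServiceAdapter
{
    public Invoice GetInvoice(string invoiceId)
    {
        // Real code would call customer A's generated proxy here, e.g.:
        // var raw = new CustomerAServiceClient().GetInvoiceA(invoiceId);
        // and map its service-specific type onto the shared Invoice.
        return new Invoice { Id = invoiceId, Total = 0m };
    }
}

public class CustomerBAdapter : ICustomerServiceAdapter
{
    public Invoice GetInvoice(string invoiceId)
    {
        // Same idea for customer B's slightly different proxy types.
        return new Invoice { Id = invoiceId, Total = 0m };
    }
}

// The facade: the rest of the application only ever talks to this class.
public class BillingFacade
{
    private readonly ICustomerServiceAdapter _adapter;

    public BillingFacade(string customerName)
    {
        // The customer name read from your table decides which adapter to use.
        if (customerName == "A") _adapter = new CustomerAAdapter();
        else if (customerName == "B") _adapter = new CustomerBAdapter();
        else throw new NotSupportedException(customerName);
    }

    public Invoice GetInvoice(string invoiceId)
    {
        return _adapter.GetInvoice(invoiceId);
    }
}

A future customer C then means one new adapter and one new branch (or an IoC registration), with no changes to the rest of the application.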

Related

Organizing service of BusinessLayer

I'm currently trying to think of a strategy for implementing services in the business layer. My first approach was to implement one service functionality per class, but the number of functionalities will eventually grow and become hard to call from the presentation layer, since I'd have to remember them all (a large number of classes). The opposite alternative would be to have one single class with all services implemented, which would create a gigantic file.
I've seen implementations that put functionalities (methods) inside classes such as ProductBLL or CompanyBLL, which would make the services more manageable. However, some services, such as getmeProductsAndCompanies, which are fairly frequent, seem to belong to neither ProductBLL nor CompanyBLL.
My question is: is it a good idea to make a class AplicationService with one method per service, which instantiates the correct service class and calls the correct method? My goal with this was to instantiate AplicationService in the PL as "as" and call as.getmeProductsAndCompanies().
The material I've found online so far offers either very theoretical or very complex solutions. I am open to suggestions too.
My first approach was to implement one service functionality per class,
but the number of functionalities will eventually grow and become hard
to call from the presentation layer, since I'd have to remember them all
(a large number of classes)
I do not think aggregating all services into a single facade will help matters; it will only complicate them. Consider instead structuring your services and devising a naming pattern for them.
For example, you have an OrderService that does everything related to orders (a bad name choice, by the way ;) ). Eventually it grows too big, and when that happens, you must split it in two. When splitting, use a functional approach to naming: the name of the service must answer the question "What exactly does this service do, and with what types of data?". For example, OrderDisplayService looks like a good choice to me.
When you need to find out which service to inject into your governing entity (usually an MVC-like controller), first type the service namespace (\Acme\Services\), then the name of the object you want to deal with (Order), then a verb describing what exactly you want to do with it (Display), and then press your IDE's autocomplete keys. You will get a relatively short list of services available for injection (I assume you use an IoC container for that).
Split your services into layers or units so that when you work in the IDE, you see only a functionally complete part of them in the currently expanded directory.
Use the composite pattern: basically, you create as many small classes/functions as you can; those parts are then called by a bigger class or function, and the bigger class can in turn be used by a still bigger one (see the sketch below).
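A rough sketch of that idea in C#, echoing the getmeProductsAndCompanies example from the question (all names are invented for illustration):

using System.Collections.Generic;

// Small, focused parts.
public class ProductService
{
    public List<string> GetProducts() { return new List<string> { "p1", "p2" }; }
}

public class CompanyService
{
    public List<string> GetCompanies() { return new List<string> { "c1" }; }
}

// Simple result holder for the combined query.
public class ProductsAndCompanies
{
    public List<string> Products { get; set; }
    public List<string> Companies { get; set; }
}

// The bigger class composes the small ones; the presentation layer sees only this.
public class ApplicationService
{
    private readonly ProductService _products = new ProductService();
    private readonly CompanyService _companies = new CompanyService();

    public ProductsAndCompanies GetProductsAndCompanies()
    {
        return new ProductsAndCompanies
        {
            Products = _products.GetProducts(),
            Companies = _companies.GetCompanies()
        };
    }
}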

wcf decision: one service multiple contracts or many services

I am using .NET 4 to create a small client-server application for a customer. Should I create one giant service that implements many contracts (IInvoice, IPurchase, ISalesOrder, etc.), or should I create many services, each running one contract on its own port? My question is specifically about the pros/cons of either choice. Also, what is the common way of making this decision?
My true dilemma is that I have no experience making this decision, and little enough experience with WCF that I need help understanding the technical implications of such a choice.
Don't create one large service that implements n service contracts. These types of services are easy to create, but will eventually become a maintenance headache and will not scale well. Plus, you'll get all sorts of code-merge conflicts if a development group is competing for check-ins/check-outs.
Don't create too many services either; avoid the trap of making your services too fine-grained. Try to create services based on functionality. The methods exposed by these services shouldn't be fine-grained either: you're better off having fewer methods that do more. Avoid creating similar functions like GetUserByID(int id) and GetUserByName(string name) in favor of a single GetUser(userObject user). You'll have less code, easier maintenance, and better discoverability.
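As a sketch of that last point (UserQuery, User, and IUserService are illustrative names standing in for the answer's userObject idea):

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class UserQuery
{
    [DataMember] public int? Id { get; set; }     // set for lookup by ID
    [DataMember] public string Name { get; set; } // set for lookup by name
}

[DataContract]
public class User
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface IUserService
{
    // One coarse-grained operation replaces GetUserByID and GetUserByName.
    [OperationContract]
    User GetUser(UserQuery query);
}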
Finally, you're probably only going to need one port no matter what you do.
UPDATE 12/2018
Funny how things have changed since I wrote this. Now, with the microservices pattern, I'm creating a lot of services with chatty APIs :)
You would typically create different services for each main entity like IInvoice, IPurchase, ISalesOrder.
Another option is to separate queries from commands. You could have a command service for each main entity, implementing business operations that accept only the data they need to perform the operation (avoiding CRUD-like operations), and one query service that returns data in the format required by the client. This means the command side uses the underlying domain model/business layer, while the query service operates directly on the database (bypassing the business layer, which is not needed for querying). This simplifies your querying a lot and makes it more flexible (return only what the client needs).
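A sketch of what that split could look like as WCF contracts (all names invented for illustration):

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class InvoiceSummary
{
    [DataMember] public string InvoiceId { get; set; }
    [DataMember] public decimal Total { get; set; }
}

// Command side: business operations on the domain model, accepting only
// the data they need; no generic CRUD-style Update(entity) here.
[ServiceContract]
public interface IInvoiceCommandService
{
    [OperationContract]
    void ApproveInvoice(string invoiceId, string approvedBy);
}

// Query side: reads straight from the database, shaped for the client.
[ServiceContract]
public interface IInvoiceQueryService
{
    [OperationContract]
    InvoiceSummary[] GetOpenInvoices(string customerId);
}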
In real-world applications you have one service contract for each entity: Invoice, Purchase, and SalesOrder would each have a separate ServiceContract.
However, each service contract will have heterogeneous clients: Invoice might be called by the back office through a Windows application using netNamedPipeBinding or netTcpBinding, while at the same time a client application needs to call the service using basicHttpBinding or wsHttpBinding. Basically, you need to create multiple endpoints for each service.
It seems that you are mixing up DataContract(s) and ServiceContract(s).
You can have one ServiceContract and many DataContract(s) and that would perfectly suit your needs.
The truth is that splitting up WCF services, or any services, is a balancing act. The principle is that you want to keep downward pressure on complexity while still considering performance.
The more services you create, the more configuration you will have to write. Also, you will increase the number of proxy classes you need to create and maintain on the client side.
Putting too many ServiceContracts on one service will increase the time it takes to generate and use a proxy. But if you only end up with one or two operations per contract, you will have added complexity to the system with very little to gain. This is not a scientific prescription, but a good rule of thumb might be 10-20 OperationContracts per ServiceContract.
Class coupling is of course a consideration, but are you really dealing with separate concerns? It depends on what your system does, but most systems deal with only a few areas of concern, so splitting things up may not actually decrease class coupling that much anyway.
Another thing to remember, and this is ultra important: always make your methods as generic as possible. WCF deals in DataContracts for a reason: DataContracts mean that you can send any object to and from the server, so long as its type is known.
So, for example, you might have 3 OperationContracts:
[OperationContract]
Person GetPerson(string id);
[OperationContract]
Dog GetDog(string id);
[OperationContract]
Cat GetCat(string id);
But, so long as these are all known types, you could merge these in to one operation like:
[OperationContract]
IDatabaseRecord GetDatabaseRecord(string recordTypeName, string id);
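One caveat worth sketching: for the merged operation to serialize, WCF has to be told which concrete types can hide behind IDatabaseRecord, for example with ServiceKnownType on the contract (IRecordService is an invented name; Person, Dog, and Cat are the types from the example above):

using System.ServiceModel;

[ServiceContract]
[ServiceKnownType(typeof(Person))]
[ServiceKnownType(typeof(Dog))]
[ServiceKnownType(typeof(Cat))]
public interface IRecordService
{
    [OperationContract]
    IDatabaseRecord GetDatabaseRecord(string recordTypeName, string id);
}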
Ultimately, this is the most important thing to consider when designing service contracts. The same applies to REST if you are using a DataContract-like serialization method.
Lastly, go back over your ServiceContracts every few months and delete operations that are not getting used by the clients. This is another big one!
You should base the decision on the expected load, the extensibility needed, and the future perspective. As you wrote "small client server application for a customer", that doesn't give a clear idea of the intended use of the system at hand. Mr. Big's answer should be considered too.
You are most welcome to ask further questions backed with specific data or particulars about the situation at hand. Thanks.

Prevent WCF exposing my whole class?

I've just begun learning WCF, and I'm coming from a total non-web background.
I have built a 3-tier desktop application, which compiles into one exe, which runs locally.
Now I want to move the whole business logics layer to a centric server, and make the GUI a client application.
As far as I understand, WCF should be my solution, as indeed, it helped me achieved what I wanted.
I managed to call remote functions, which is the basis of what I need.
My problem now is that I don't quite understand the architecture.
For example, one of my services, returns a data type (class), from my Business Logics layer.
This class automatically becomes available to the client through the WCF mechanism.
But the problem is, this class contains some methods which I definitely do not want to expose to the client.
For example, a Save method (which saves to the DB).
Furthermore, sometimes I don't even want to allow the client to change all the properties of the class, since the class might be sent back to one of my services.
I do not want to re-validate the class instance in the service.
What should I do? Should I build another layer, a restricted version of the business logic, which I expose to the client? Or is there a way to expose only part of my class to the client without restricting the server itself?
I know this is a basic question, but honestly I've searched a lot before asking here. My problem is that I don't quite know what to search for.
My second question is then: do you have any recommendation for a resource that can explain this architecture to me...?
Typically, if you want to encapsulate your business layer, you do not want to expose the business objects directly. This is because you now have a decoupled client, and you don't necessarily want to have to update the client every time the business logic or properties change.
This is where Data Transfer Objects (DTO) come into play nicely. Usually, you want to have control over your contract (data and methods) that you expose. Therefore, you would explicitly make other objects (DTOs) that make up the transfer layer. Then, you can safely change your client and server code independently (as long as both still fulfill the contract objects).
This usually requires a little more mapping (before you send or receive on each side) but it is often worth it.
For WCF, your interfaces and classes marked with [ServiceContract] and your classes marked with [DataContract] usually make up this transfer layer.
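A bare-bones illustration of such a transfer layer (Customer, CustomerDto, and the mapping helper are invented names):

using System.Runtime.Serialization;

// Server-side domain class: carries behaviour the client must not see.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public void Save() { /* server-only persistence */ }
}

// The DTO is pure data: no Save(), no internal-only members.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// Mapping at the service boundary keeps the domain class private to the server.
public static class CustomerMapper
{
    public static CustomerDto ToDto(Customer entity)
    {
        return new CustomerDto { Id = entity.Id, Name = entity.Name };
    }
}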
In WCF, to expose a method to the client you have to mark it with OperationContractAttribute. So if you don't want clients to use your Save method, just don't mark it with this attribute.
More info here: http://msdn.microsoft.com/en-us/library/system.servicemodel.servicecontractattribute.aspx
Pretty much the same thing goes for properties, but with a different attribute: DataMemberAttribute. If you don't want the client to see a property, just don't mark it with [DataMember].
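For example (a hypothetical class; only the marked members cross the wire):

using System.Runtime.Serialization;

[DataContract]
public class Invoice
{
    [DataMember]
    public string Number { get; set; }          // serialized: the client sees this

    public decimal InternalMargin { get; set; } // no [DataMember]: never serialized

    // Methods on a data contract never cross the wire at all, so Save()
    // stays server-side regardless of attributes.
    public void Save() { }
}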
But the problem is, this class contains some methods which I definitely do not want to expose to the client.
Are you able to provide an example of your class and interface code? If so I'm sure you might be able to get more specific answers.
For example, a Save method (which saves to the DB).
One possible approach would be to separate your class into two classes: define the properties in the first class, and then use that class as the base class of a second class in which you define the methods. This would allow you to return only the properties while keeping your code DRY (sketched below).
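A sketch of that split (all names invented):

using System.Runtime.Serialization;
using System.ServiceModel;

// Base class: only the serializable state.
[DataContract]
public class CustomerData
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// Derived class: server-side behaviour, never exposed to the client.
public class Customer : CustomerData
{
    public void Save() { /* persist to the DB */ }
}

// The contract mentions only the base type. Return a plain CustomerData
// (or declare [KnownType]) so the serializer never meets the derived type.
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    CustomerData GetCustomer(int id);
}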
Furthermore, sometimes I don't even want to allow the client to change all the properties of the class, since the class might be sent back to one of my services.
I do not want to re-validate the class instance in the service.
While you are able to define logic in the get and set methods of each property, I would highly recommend re-validating any input received between services, simply because any future change or error in one service could lead to larger problems across your application. It also helps make your application more secure against potential attacks.
Should I build another layer, a restricted version of the business logic, which I expose to the client? Or is there a way to expose only part of my class to the client without restricting the server itself?
I agree with the above answers that you should be able to limit access to the different properties and methods using the data and method attributes within your interfaces.
My second question is then: do you have any recommendation for a resource that can explain this architecture to me...?
If you are looking for inexpensive but highly valuable video-based training, I've found the courses Pluralsight offers to be quite good for both architecture and WCF services (by the way, I am not associated with them; I just enjoyed their training).

Business library reuse or exposing services [closed]

I am having trouble deciding between two possible design choices. I have a web site with a pretty extensive business layer and DAL (the website, BLL, and DAL are in separate DLLs). I need to design a Windows service that can take some of my business objects, write them to a file, and store them locally within our network. The files are then imported into a 3rd-party program, which does further processing on them.
I can design this service one of two ways:
Wrap the service around the business layer and DAL. This would be quick and easy, but the downside is that every time the business layer changes, the service will have to be updated.
Add a web service to the web site and just query it for what I need. The Windows service wouldn't have to use the business layer, and as long as the web service doesn't change, I'll be fine. The only downside is that I may have to create some basic business objects to parse the web service's XML into.
The Windows service will have to poll the business layer/DAL or the web service every 10-20 minutes or so. The Windows service is necessary because the web site is hosted offsite and thus doesn't have access to any of our local resources. I am leaning towards option 2, but I'm torn.
Given the two choices, which is the better option? Are there other options I haven't considered? Also, how do you usually design for situations where you have one core set of libraries that are primarily used by a website but may end up being used for data retrieval or to perform some other function?
I'm not sure what the criteria are for storing certain business objects as files on the network, but if you're doing this on a regular basis then presumably you are trying to track changes of some kind, so there is another solution: build the logic directly into the business/persistence layer.
If this secondary file storage is a business requirement, then it ought to be embedded directly in that tier and triggered by some sort of event. That way, instead of having what is essentially an ad hoc post-processing job that can get out of sync with the rest of the system, you have just one coherent system.
Invert the design: instead of wrapping a web service around the business services and using it for ad hoc reporting, create a web service that encapsulates the data you need to receive from the export on a regular basis, and have your business tier send messages to it when new data is ready. You can send messages asynchronously so as not to tie up the business services, and depending on your reliability requirements you could set up a message queue (it's easier than it sounds; WCF already knows how to use MSMQ as the delivery mechanism, and it's just a few configuration settings to change).
I can't say with any certainty that this is better than your first two options without knowing a good deal more about the architecture, the amount and type of data, the scheduling and reporting requirements, and so on, but it is something you should consider. If you think your business services are likely to change fairly frequently, then it might work better to have them push data outward to a "warehouse"-type abstraction rather than having a mining process pull it.
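For the queued variant mentioned above, a minimal client-side sketch could look like the following (the queue path and the IExportService contract are placeholders, not anything prescribed by WCF):

using System.ServiceModel;

// Operations over a queued binding must be one-way.
[ServiceContract]
public interface IExportService
{
    [OperationContract(IsOneWay = true)]
    void PublishExport(string payload);
}

public static class ExportSender
{
    public static void Send(string payload)
    {
        // netMsmqBinding delivers via MSMQ; the message is queued even if
        // the receiving service is down, and delivered when it comes back.
        var binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
        var address = new EndpointAddress("net.msmq://localhost/private/exportQueue");

        var factory = new ChannelFactory<IExportService>(binding, address);
        IExportService channel = factory.CreateChannel();
        channel.PublishExport(payload);
        factory.Close();
    }
}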
Otherwise, I think I would go with option 2. I don't know if you've worked with WCF services before but you should know that you never actually have to parse XML. Everything is done through data contracts and when you generate a proxy for the web service, you get strongly-typed .NET objects. If you can pass your domain objects directly through the service API then it's really very little work at all to create the web service.
The real downside to a web service is that you have to take steps to ensure that your service contract never substantially changes (otherwise it can break clients). So you might eventually end up needing to create Data Transfer Objects on the service side to use as the public API instead of passing through domain objects. But in many cases you won't need to do this for a good long while, so go ahead and try it out, you'll see that it's pretty straightforward.
A variant of option two:
Add a WCF service to the site, exposing the information required as basic DTO DataContracts.
You could use AutoMapper or similar within the WCF service to handle the boring bit of converting your business objects to DTOs.
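A small sketch of the AutoMapper route (Order and OrderDto are placeholder types; the API shown is AutoMapper's MapperConfiguration style):

using AutoMapper;

public class Order    { public int Id { get; set; } public decimal Total { get; set; } }
public class OrderDto { public int Id { get; set; } public decimal Total { get; set; } }

public static class DtoMapping
{
    // In real code, build the MapperConfiguration once at service startup,
    // not per call; it is inlined here only to keep the sketch short.
    public static OrderDto ToDto(Order order)
    {
        var config = new MapperConfiguration(cfg => cfg.CreateMap<Order, OrderDto>());
        IMapper mapper = config.CreateMapper();
        return mapper.Map<OrderDto>(order);
    }
}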
From your point two I understand that you would just add the web API for this extra service. That way you would have to update three parts for any change (extra service, web API, DLL), whereas with option one you would only have to update two parts (extra service, DLL), so I would go with option one.
BUT if you are aiming for a general web API which you will have to maintain anyway, go with option two.
For more flexibility, instead of hard-wiring your service to the business layer and DAL, and instead of relying on the web site (through an integrated web service), make use of design concepts like interfaces, dynamic type loading, and Inversion of Control, so that your service is a thin, decoupled layer that communicates with the business layer and DAL and allows them to be updated dynamically without recompiling the service. Maybe put the assemblies in the machine's Global Assembly Cache so they can be shared across various other projects, assemblies, and apps.
I know it seems like throwing out jargon for the sake of it, but that's how I would start to think.
Edit:
Loading types dynamically is actually amazing and easy. This is quick C# pseudo-code for one way to do it, and even without testing it might actually be right.
// Get a System.Type from its string representation
// (use an assembly-qualified name; GetType returns null if the type isn't found).
Type t = Type.GetType("type name");
// Create an instance of the type via its parameterless constructor.
object o = Activator.CreateInstance(t);
// Cast it to the interface (or actual Type) you're working with.
IMyInterface strongObject = (IMyInterface)o;
// ... and continue from there with the instance.
Instructions on how to formulate the string representation of a type name can be found on MSDN under Type.AssemblyQualifiedName, Type.GetType, and similar topics. In short, you can see a lot of assembly-qualified type names in app.config or web.config files, because they use the same format.

Sharing domain model with WCF service

Is it good practice to reference my web application's domain-layer class library from a WCF service application?
Doing so gives me easy access to the existing classes in my domain model, so I won't need to re-define similar classes for the WCF service.
On the other hand, I don't like the coupling it creates between the application and the service, and I am curious whether it could create difficulties for me in the long run.
I also think that having dedicated classes for my WCF app would be more efficient, since those classes would contain only the members used by the service and nothing else. If I use the classes from my domain layer, they will carry many fields the service never uses, causing unnecessary data transfer.
I would appreciate your thoughts from experience.
No, it's not. Entities are all about behaviour; a data contract is all about... data. Plus, as you mentioned, you wouldn't want to couple them together, because it would cripple your ability to react to change very soon.
For those still coming across this post, like I did...
Check out this site. It's a good explanation of the topic.
Conclusion: Go through the effort of keeping the boundaries of your architecture clear and clean. You will get some credit for it some day ;)
I personally frown on passing domain objects directly through WCF. As Krzysztof said, it's a data contract, not a contract about the behaviour of the thing you are passing over the wire.
I typically do this:
Define the data contracts in their own assembly
The service has a reference to both the data contracts assembly and the business entity assemblies.
Create extension methods in the service namespace that map the entities to their corresponding data contracts and vice versa (sketched below).
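A sketch of step 3 (Order and OrderDto are invented types standing in for steps 1 and 2):

using System.Runtime.Serialization;

// Business entity (server side) and its data contract.
public class Order { public int Id { get; set; } public decimal Total { get; set; } }

[DataContract]
public class OrderDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public decimal Total { get; set; }
}

// Step 3: extension methods in the service namespace, mapping both ways.
public static class OrderMappingExtensions
{
    public static OrderDto ToDataContract(this Order entity)
    {
        return new OrderDto { Id = entity.Id, Total = entity.Total };
    }

    public static Order ToEntity(this OrderDto dto)
    {
        return new Order { Id = dto.Id, Total = dto.Total };
    }
}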
Putting the conceptual purity of what a "Data Contract" is aside: if you begin to pass entities around, you are setting your shared entity up to be pulled in different design directions by each side of the WCF boundary. Inevitably you'll end up with behaviours that belong to only one side, or even worse, you'll have to expose methods that conceptually do the same thing but in a different way for each side of the WCF boundary. It can potentially get very messy over the long term.
