Passing custom complex objects between server and client in C#

I have a desktop C# app that I want to split into two parts - server part and client part. My app is already split into two very independent parts that communicate by exchanging some (complex!) objects.
If I want to put one part of my app on some web server, what kind of technology should I use for passing those custom complex objects between the server part and the client part? I was thinking about WCF, but... I'm not sure that WCF can easily handle (send/receive) custom objects (composed of many other custom objects). I don't need WCF because I'm not planning to offer my service to any third party, and I'm not planning to port my client app to another OS...
That's why I'm confused and need your help: what kind of remoting technology should I use in my case?

WCF stands for Windows Communication Foundation. In other words, it is about general cross-process/cross-machine communication and is not limited to heterogeneous systems.
One thing to remember about WCF is that, despite appearances, you are not actually passing objects at all - the objects are used by a serializer to generate messages. At the other end they are deserialized into an independent copy. You don't, unlike COM, get a reference back to an object on the sender.
The reason this is important is that if the complex objects have non-serializable state, such as a socket connection, then that state won't make it to the receiving side.
Also, with the DataContractSerializer (which is the default), unless your objects are annotated with the [Serializable] attribute or you annotate the classes with [DataContract] and [DataMember], you will only be sending state that is exposed publicly (via a public field or a property).
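For illustration only (the type and member names below are invented, not from the question), a nested custom type might be annotated like this so that its public state, and only that state, crosses the wire:
using System.Collections.Generic;
using System.Runtime.Serialization;

[DataContract]
public class Order
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public List<OrderLine> Lines { get; set; }   // nested custom type, itself a [DataContract]

    // Not marked [DataMember]: this field never reaches the receiver.
    private object _dbConnection;
}

[DataContract]
public class OrderLine
{
    [DataMember]
    public string Product { get; set; }

    [DataMember]
    public decimal Price { get; set; }
}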
This isn't purely a problem for WCF; Remoting requires objects to derive from MarshalByRefObject or to be annotated with the [Serializable] attribute. Building distributed systems is quite different from building systems that all share the same memory address space. You have to think carefully about how you define the boundary between the distributed pieces because, for example, lots of small calls will kill your performance where a few data-rich calls would not (although from your description this might not be an issue that affects you).
So WCF can handle arbitrarily complex object graphs - just keep the above points about serialization in mind.

Well, DataContracts in WCF support complex objects, so I don't see a problem with that (how complex are your objects?); however, you should probably use the technology that is sufficient for your case. You could use Remoting, or even raw sockets, but in almost all cases that is overkill and goes too low in the .NET stack for nothing; you would just be wasting your time on the implementation.
If you have no reason against WCF, I would go that way, because it is very simple and powerful. There are also standard ASP.NET ASMX web services if you'd like.
One thing to note: whichever technology you choose, you should structure your code with a distribution layer that exposes coarse-grained methods.

Related

wcf decision: one service multiple contracts or many services

I am using .NET 4 to create a small client-server application for a customer. Should I create one giant service that implements many contracts (IInvoice, IPurchase, ISalesOrder, etc.) or should I create many services, each running one contract on its own port? My question is specifically about the pros/cons of either choice. Also, what is the common way of making this decision?
My true dilemma is that I have no experience making this decision, and I have little enough experience with WCF that I need help understanding the technical implications of such a decision.
Don't create one large service that implements n-number of service contracts. These types of services are easy to create, but will eventually become a maintenance headache and will not scale well. Plus, you'll get all sorts of code merging conflicts if there's a development group competing for check-ins/check-outs.
Don't create too many services either. Avoid the trap of making your services too fine-grained. Try to create services based on functionality. The methods exposed by these services shouldn't be fine-grained either. You're better off having fewer methods that do more. Avoid creating similar functions like GetUserByID(int ID) and GetUserByName(string Name); create a single GetUser(userObject user) instead. You'll have less code, easier maintenance and better discoverability.
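A rough sketch of that consolidation (names are illustrative; User is assumed to be an existing data contract):
[ServiceContract]
public interface IUserService
{
    // One coarse-grained operation replaces GetUserByID and GetUserByName.
    [OperationContract]
    User GetUser(UserQuery query);   // User is assumed to be an existing [DataContract]
}

[DataContract]
public class UserQuery
{
    [DataMember] public int? Id { get; set; }      // set for lookup by id
    [DataMember] public string Name { get; set; }  // set for lookup by name
}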
Finally, you're probably only going to need one port no matter what you do.
UPDATE 12/2018
Funny how things have changed since I wrote this. Now with the micro-services pattern, I'm creating a lot of services with chatty APIs :)
You would typically create different services for each main entity like IInvoice, IPurchase, ISalesOrder.
Another option is to separate queries from commands. You could have a command service for each main entity that implements business operations accepting only the data they need in order to perform the operation (avoid CRUD-like operations), and one query service that returns the data in the format required by the client. This means that the command part uses the underlying domain model/business layer, while the query service operates directly on the database (bypassing the business layer, which is not needed for querying). This simplifies your querying a lot and makes it more flexible (return only what the client needs).
In real-world applications you typically have one service contract per main entity, so Invoice, Purchase and SalesOrder would each have a separate ServiceContract.
However, each service contract will have heterogeneous clients: Invoice might be called by the back office through a Windows application using netNamedPipeBinding or netTcpBinding, while at the same time a client application needs to call the service using basicHttpBinding or wsHttpBinding. Basically, you need to create multiple endpoints for each service, as sketched below.
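As a hedged sketch (the service and contract names are assumptions, not from the question), those extra endpoints can be added either in configuration or directly in code when the host starts:
using System;
using System.ServiceModel;

class InvoiceHost
{
    static void Main()
    {
        // InvoiceService / IInvoiceService are assumed to exist elsewhere.
        var host = new ServiceHost(typeof(InvoiceService),
            new Uri("http://localhost:8000/Invoice"));

        // Fast binary endpoint for internal back-office clients.
        host.AddServiceEndpoint(typeof(IInvoiceService), new NetTcpBinding(),
            "net.tcp://localhost:8001/Invoice");

        // Interoperable HTTP endpoint for external client applications.
        host.AddServiceEndpoint(typeof(IInvoiceService), new BasicHttpBinding(), "");

        host.Open();
        Console.ReadLine();  // keep the host alive until a key is pressed
        host.Close();
    }
}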
It seems that you are mixing up DataContract(s) and ServiceContract(s).
You can have one ServiceContract and many DataContract(s) and that would perfectly suit your needs.
The truth is that splitting up WCF services - or any services - is a balancing act. The principle is that you want to keep downward pressure on complexity while still considering performance.
The more services you create, the more configuration you will have to write. Also, you will increase the number of proxy classes you need to create and maintain on the client side.
Putting too many ServiceContracts on one service will increase the time it takes to generate and use a proxy. But if you only end up with one or two Operations on a contract, you will have added complexity to the system with very little to gain. This is not a scientific prescription, but a good rule of thumb might be about 10-20 OperationContracts per ServiceContract.
Class coupling is of course a consideration, but are you really dealing with separate concerns? It depends on what your system does, but most systems deal with only a few areas of concern, so splitting things up may not actually decrease class coupling that much anyway.
Another thing to remember, and this is ultra important, is to always make your methods as generic as possible. WCF deals in DataContracts for a reason: DataContracts mean that you can send any object to and from the server so long as the DataContracts are known.
So, for example, you might have 3 OperationContracts:
[OperationContract]
Person GetPerson(string id);
[OperationContract]
Dog GetDog(string id);
[OperationContract]
Cat GetCat(string id);
But, so long as these are all known types, you could merge them into one operation like:
[OperationContract]
IDatabaseRecord GetDatabaseRecord(string recordTypeName, string id);
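One caveat worth adding, as a sketch under the assumption that Person, Dog and Cat all implement IDatabaseRecord: the serializer has to be told which concrete types may come back behind the interface, for example with ServiceKnownType:
[ServiceContract]
[ServiceKnownType(typeof(Person))]
[ServiceKnownType(typeof(Dog))]
[ServiceKnownType(typeof(Cat))]
public interface IRecordService
{
    // The declared return type is the interface; the attributes above tell the
    // serializer which concrete types it may actually have to serialize.
    [OperationContract]
    IDatabaseRecord GetDatabaseRecord(string recordTypeName, string id);
}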
Ultimately, this is the most important thing to consider when designing service contracts. This also applies to REST if you are using a DataContract-like serialization method.
Lastly, go back over your ServiceContracts every few months and delete operations that are not getting used by the clients. This is another big one!
You should base the decision on the expected load, the extensibility needed and the future perspective. As you wrote "small client server application for a customer", that does not give a clear idea of the intended use of the development in hand. Mr. Big's answer should be considered too.
You are most welcome to put forward a further question backed with specific data or particulars about the situation in hand. Thanks.

Prevent WCF exposing my whole class?

I've just begun learning WCF, and I'm coming from a total non-web background.
I have built a 3-tier desktop application which compiles into one exe and runs locally.
Now I want to move the whole business logic layer to a central server, and make the GUI a client application.
As far as I understand, WCF should be my solution, and indeed, it helped me achieve what I wanted.
I manage to run remote functions, which is the basics of what I need.
My problem now is that I don't quite understand the architecture.
For example, one of my services returns a data type (class) from my Business Logic layer.
This class automatically becomes available to the client through the WCF mechanism.
But the problem is, this class contains some methods which I definitely do not want to expose to the client.
For example, a Save method (saves to the db).
Furthermore, sometimes I don't even want to allow the client to change all the properties of the class, since this class might be sent to one of my services.
I do not want to re-validate the class instance in the service.
What should I do? Should I build another layer, a restricted version of the Business Logic, which I expose to the client? Or is there any way to expose only part of my class to the client, without restricting the server itself?
I know this is a basic question, but honestly I've searched a lot before asking here. My problem is I don't quite know what to search for.
My second question is then: do you have any recommendation for a resource that can explain this architecture to me...?
Typically, if you want to encapsulate your business layer, you would not want to expose the business objects directly. This is because you now have a de-coupled client and you don't necessarily want to have to update the client every time the business logic/properties change.
This is where Data Transfer Objects (DTO) come into play nicely. Usually, you want to have control over your contract (data and methods) that you expose. Therefore, you would explicitly make other objects (DTOs) that make up the transfer layer. Then, you can safely change your client and server code independently (as long as both still fulfill the contract objects).
This usually requires a little more mapping (before you send or receive on each side) but it is often worth it.
For WCF, your interfaces and classes marked with [ServiceContract] and your classes marked with [DataContract] usually make up this transfer layer.
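A minimal sketch of that split, with invented names: the domain class keeps its behaviour on the server, the DTO is the only shape the client ever sees, and a small mapping step sits at the service boundary.
using System.Runtime.Serialization;

// Domain object: stays on the server, keeps its behaviour.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }

    public void Save() { /* persists to the database; never exposed to the client */ }
}

// Data transfer object: part of the service contract.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// Mapping performed inside the service before returning the result.
public static class CustomerMapping
{
    public static CustomerDto ToDto(Customer c) =>
        new CustomerDto { Id = c.Id, Name = c.Name };
}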
In WCF, to expose a method to the client you have to mark it with the OperationContractAttribute. So if you don't want clients to use your Save method, just don't mark it with this attribute.
More info here: http://msdn.microsoft.com/en-us/library/system.servicemodel.servicecontractattribute.aspx
Pretty much the same thing applies to properties, but with a different attribute: DataMemberAttribute. If you don't want the client to see a property, just don't mark it with the [DataMember] attribute.
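A quick sketch of that approach (the class and member names are made up): whatever is left unmarked simply does not exist as far as the client is concerned.
using System.Runtime.Serialization;

[DataContract]
public class Invoice
{
    [DataMember]
    public decimal Total { get; set; }             // visible to the client

    public string InternalAuditNote { get; set; }  // no [DataMember]: never serialized

    // Methods are never serialized by the DataContractSerializer;
    // Save exists only in the server-side assembly.
    public void Save() { /* write to the database */ }
}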
But the problem is, this class contains some methods which I definitely do not want to expose to the client.
Are you able to provide an example of your class and interface code? If so, I'm sure you might be able to get more specific answers.
For example, a Save method (saves to the db).
One possible approach would be to separate your class into 2 classes. Define the properties in the first class and then use that class as the base class of your second class. Then use the second class to define the methods. This would allow you to return only the properties while allowing you to keep your code DRY.
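Roughly, with invented names, that two-class split could look like this: the base class is what the service returns, the derived class adds the server-only behaviour.
using System.Runtime.Serialization;

[DataContract]
public class OrderData                // returned by the service
{
    [DataMember] public int Id { get; set; }
    [DataMember] public decimal Total { get; set; }
}

public class Order : OrderData        // used only on the server
{
    public void Save() { /* write to the database */ }
}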
Furthermore, sometimes I don't even want to allow the client to change all the properties of the class, since this class might be sent to one of my services.
I do not want to re-validate the class instance in the service.
While you are able to define logic in the get and set methods for each property, I would highly recommend revalidating any input received between services, simply because any future changes or errors in one service could potentially lead to larger problems across your application. In addition, this also helps to ensure your application is more secure against potential attacks.
Should I build another layer, a restricted version of the Business Logic, which I expose to the client? Or is there any way to expose only part of my class to the client, without restricting the server itself?
I agree with the above answers that you should be able to limit access to the different properties and methods using the data and method attributes within your interfaces.
My second question is then: do you have any recommendation for a resource that can explain this architecture to me...?
If you are looking for inexpensive but highly valuable video-based training, I've found the courses that Pluralsight offers to be quite good for both architecture and WCF services (btw, I am not associated with them, I just enjoyed their training).

ASP.NET MVC using Loosely-Coupled WCF Web Service

The reason why I need loosely-coupled WCF is that Entity Framework is tightly-coupled. When I say loosely-coupled, I mean there's no need to instantiate the database context or add the service reference of the WCF. It just relies on web configuration or some .ini file that does not require recompilation when developers need to change servers, IP addresses or service URLs.
Instead, the MVC side (say, a controller) will just send a request message and then get the response data from the WCF service. But we still cannot do without Models based on the database (since we need them for IntelliSense in the view markup), which is where the WCF will get the data. Let's say we already have those database object classes; we then create some repository that binds the WCF data to the MVC Models.
What I mean by WCF web service is one that ONLY contains messages, no more passing of object references, because that's the new SOA definition. It makes more sense to pass messages instead of objects.
Is this a better approach in terms of scalability and performance? I don't mean to offend the Entity Framework fans.
It is an entirely valid approach to define a WCF web service in terms of message schemas which just use basic types, so that clients need know nothing about WCF in order to use the service. WCF would be useless for interop with other platforms (e.g. Java) otherwise.
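A hedged sketch of what such a contract might look like (all names are invented): the operation exchanges plain request/response messages built from basic types, so a Java or other non-WCF client can work from the WSDL alone.
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderResponse SubmitOrder(OrderRequest request);
}

[MessageContract]
public class OrderRequest
{
    [MessageBodyMember] public string CustomerId { get; set; }
    [MessageBodyMember] public string ProductCode { get; set; }
    [MessageBodyMember] public int Quantity { get; set; }
}

[MessageContract]
public class OrderResponse
{
    [MessageBodyMember] public string OrderNumber { get; set; }
    [MessageBodyMember] public bool Accepted { get; set; }
}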
Understand that WCF is a general and powerful framework for implementing communication over a variety of transport protocols. It can be equally effectively used for raw XML messaging as for programming in terms of objects. Object serialisation and deserialisation is an optional extra of the framework, not a requirement. (There is really no such thing as "passing of object reference" - ultimately it is an XML infoset which travels across the communication channel. Also, Entity Framework is not part of WCF - it is a distinct ORM Framework which you can use with WCF if you want, but that's your choice.)
Scalability and performance is entirely orthogonal to the design of the service in terms of its data and operation contracts. You should feel free to adopt whatever approach to defining your services is best for your application. If that's XML messages, that's fine - don't let anyone tell you otherwise.

Business library reuse or exposing services [closed]

I am having trouble deciding between two possible design choices. I have a web site which has a pretty extensive business layer and DAL (website, bll, and dal are all in multiple separate dlls). I need to design a windows service that can take some of my business objects, write them to a file, and store them locally within our network. The files are then imported into a 3rd party program which does further processing on them.
I can design this service one of two ways:
Wrap the service around the business layer and DAL. This would be quick and easy, but the downside is that every time the business layer changes, the service will have to be updated.
Add a web service to the web site and just query the web service for what I need. The windows service wouldn't have to use the business layer and, as long as the web service doesn't change, I'll be good. The only downside is that I may have to create some basic business objects to parse the web service's XML into.
The windows service will have to poll the business layer/dal or web service every 10-20 minutes or so. The windows service is necessary because the web site is hosted offsite and thus doesn't have access to any of our local resources. I am leaning towards option 2 but I'm torn.
Given the two choices, which is the better option? Are there other possible options that I haven't considered? Also, how do you usually design for situations where you have one core set of libraries that are primarily used by a website but may end up being used either for data retrieval or to perform some other function?
I'm not sure what the criteria are for storing certain business objects as files on the network, but if you're doing this on a regular basis then presumably you are trying to track changes of some kind, so there is another solution: build the logic directly into the business/persistence layer.
If this secondary file storage is a business requirement, then it ought to be embedded directly in that tier and triggered by some sort of event. That way, instead of having what is essentially an ad hoc post-processing job that can get out of sync with the rest of the system, you have just one coherent system.
Invert the design - instead of wrapping a web service around the business services and using it for ad-hoc reporting, create a web service that encapsulates the data you need to receive from the export on a regular basis, and have your business tier send messages to it when new data is ready. You can send messages asynchronously so as not to tie up the business services, and depending on your reliability requirements you could set up a message queue (it's easier than it sounds, WCF already knows how to use MSMQ as the delivery mechanism, it's just a few configuration settings to change).
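For illustration only (the contract, DTO and queue names below are assumptions), the asynchronous handoff could be as small as a one-way contract bound to MSMQ:
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IExportSink
{
    // One-way: the business tier fires the message and carries on;
    // MSMQ stores it until the warehouse service picks it up.
    [OperationContract(IsOneWay = true)]
    void PublishExport(ExportBatchDto batch);
}

[DataContract]
public class ExportBatchDto            // placeholder data contract for the exported data
{
    [DataMember] public string Payload { get; set; }
}

// Host side, either in code as below or via equivalent configuration:
// var host = new ServiceHost(typeof(ExportSinkService));
// host.AddServiceEndpoint(typeof(IExportSink), new NetMsmqBinding(),
//     "net.msmq://localhost/private/exportQueue");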
I can't say with any certainty that this is better than your first two options without knowing a good deal more about the architecture, the amount and type of data, the scheduling and reporting requirements, etc., but it is something you should consider. If you think that your business services are likely to change fairly frequently, then it might work better to have them push data outward to a "warehouse" type abstraction rather than having a mining process pull it.
Otherwise, I think I would go with option 2. I don't know if you've worked with WCF services before but you should know that you never actually have to parse XML. Everything is done through data contracts and when you generate a proxy for the web service, you get strongly-typed .NET objects. If you can pass your domain objects directly through the service API then it's really very little work at all to create the web service.
The real downside to a web service is that you have to take steps to ensure that your service contract never substantially changes (otherwise it can break clients). So you might eventually end up needing to create Data Transfer Objects on the service side to use as the public API instead of passing through domain objects. But in many cases you won't need to do this for a good long while, so go ahead and try it out, you'll see that it's pretty straightforward.
A variant of option two:
Add a WCF service to the site, exposing the information required as basic DTO DataContracts.
You could use AutoMapper or similar within the WCF service to handle the boring bit of converting your business objects to DTOs.
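As a rough sketch of that conversion step (the entity and DTO names are placeholders; the calls follow AutoMapper's instance-based configuration API):
using AutoMapper;

public static class DtoMapping
{
    // Configure once, typically at service start-up (Order and OrderDto are placeholder types).
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<Order, OrderDto>()).CreateMapper();

    // Called inside the WCF operation to turn the business object into the contract DTO.
    public static OrderDto ToDto(Order orderEntity) => Mapper.Map<OrderDto>(orderEntity);
}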
From your point two I understand that you would just add the web API for this extra service. Thus, you would have to update three parts for any changes (extra service, web API, DLL). With option one you would only have to update two parts (extra service, DLL), so I would go with option one.
BUT if you are aiming for a general web API which you will always have to maintain anyway, go with option two.
For more flexibility, instead of hard-wrapping your service around the business layer and DAL, and instead of relying on the web site (through an integrated web service), make use of design concepts like interfaces, dynamic type loading and Inversion of Control, so that your service is a thin, decoupled layer that communicates with the business layer and DAL and allows both to be updated dynamically without recompiling the service. Maybe put the assemblies in the machine's Global Assembly Cache so they can be shared across various other projects, assemblies and apps.
I know it seems like throwing out jargon for the sake of it, but that's how I would start to think.
Edit:
Loading types dynamically is actually amazing and easy. This is quick C# pseudo-code for one way of doing it; even without testing, it might actually be right.
// Get a System.Type from its string representation
// (an assembly-qualified name, e.g. "MyNamespace.MyType, MyAssembly").
Type t = Type.GetType("type name");
if (t == null)
    throw new TypeLoadException("The type could not be found.");
// Create an instance of the type.
object o = Activator.CreateInstance(t);
// Cast it to the interface (or actual Type) you're working with.
IMyInterface strongObject = (IMyInterface)o;
// ... and continue from there with the instance.
Instructions about how to formulate the string representation of a type name can be found in MSDN under Type.AssemblyQualifiedName, Type.GetType and similar places. In short you can see a lot of assembly qualified type names in the app.config or web.config files because they use the same format.

Sharing domain model with WCF service

Is it good practice to reference my web application's domain layer class library from a WCF service application?
Doing that gives me easy access to the already existing classes in my domain model, so that I will not need to re-define similar classes to be used by the WCF service.
On the other hand, I don't like the coupling it creates between the application and the service, and I am curious whether it could create difficulties for me in the long run.
I also think having dedicated classes for my WCF app would be more efficient, since those classes would only contain the members that will actually be used by the service and nothing else. If I use the classes from my domain layer, they will contain many fields that are not used by the service, which will cause unnecessary data transfer.
I would appreciate your thoughts from your experience.
No, it's not. Entities are all about behaviour; a data contract is all about... data. Plus, as you mentioned, you wouldn't want to couple them together, because it would cripple your ability to react to change very soon.
For those still coming across this post, like I did...
Check out this site. It's a good explanation of the topic.
Conclusion: Go through the effort of keeping the boundaries of your architecture clear and clean. You will get some credit for it some day ;)
I personally frown on directly passing domain objects through WCF. As Krzysztof said, it's about a data contract, not a contract about the behavior of the thing you are passing over the wire.
I typically do this:
Define the data contracts in their own assembly
The service has a reference to both the data contracts assembly and the business entity assemblies.
Create extension methods in the service namespace that map the entities to their corresponding data contracts and vice versa.
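A hedged sketch of that third step (the entity and data contract names are invented):
using System.Runtime.Serialization;

public class Person                      // domain entity (server side)
{
    public int Id { get; set; }
    public string Name { get; set; }
}

[DataContract]
public class PersonDto                   // data contract exposed by the service
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

public static class PersonMappingExtensions
{
    // Entity -> data contract
    public static PersonDto ToDataContract(this Person entity) =>
        new PersonDto { Id = entity.Id, Name = entity.Name };

    // Data contract -> entity
    public static Person ToEntity(this PersonDto dto) =>
        new Person { Id = dto.Id, Name = dto.Name };
}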
Putting the conceptual purity of what a "Data Contract" is aside, if you begin to pass entities around you are setting up your shared entity to be pulled in different design directions by each side of the WCF boundary. Inevitably you'll end up with behaviors that only belong to one side, or even worse, having to expose methods that conceptually do the same thing but in a different way for each side of the WCF boundary. It can potentially get very messy over the long term.
