WCF best design principles - C#

I am looking to make some changes to an existing WCF service, and I want to know whether it would be best to create super methods, such as a Save() that uses the values it receives to decide what action to take, or to break the actions out into their own methods and expose those so the consumer decides when to call them.
For instance, I have a payment handler that receives notifications from our merchant about payment attempts and their results. Would it be better to let the handler pass in the object with a status change and have the super method figure out what to do with it (assuming no bugs have corrupted the data), or to create a separate method for each action so the intention is clearly defined?
Note that the super method is also responsible for saving data and changing status at other steps in the process.
I have googled around but haven't really found anything specific. To me the super method violates SOLID, but I was told WCF has a different set of standards and that it's best to make super methods so the consumers don't have to think.
Any feedback is welcome :)

I find that it's best if service operations can exist at a level where they have business meaning.
What this means is that if a business person was told the operation name, they would understand roughly what calling that operation would do, and could make a guess at what data it would require to be passed to it.
For this to happen, your operations should fulfill, in full or in part, some business process.
For example, the following operation signatures have business meaning:
void SolicitQuote(int brokerId, int userId, DateTime quoteRequiredBy);
int BindPolicyDocument(byte[] document, SomeType documentMetadata);
Guid BeginOnboardEmployee(string employeeName, DateTime employeeDateOfBirth);
If you use this principle when thinking about service composition, the benefit is that you will rarely stray far from the optimal path; you know what each operation does and you know when an operation is no longer needed.
An additional benefit is that because business processes change fairly rarely, you will not need to change your service contracts as often.
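To make that concrete, here is a minimal sketch of how the payment handler from the question might expose intention-revealing operations instead of a generic Save(); all operation and parameter names are invented for illustration:

using System;
using System.ServiceModel;

// Each operation maps to one step of the business process, so a consumer
// can tell what a call does from the name alone.
[ServiceContract]
public interface IPaymentNotificationService
{
    // The merchant reports the outcome of a single payment attempt.
    [OperationContract]
    void RecordPaymentAttempt(int paymentId, bool succeeded, string gatewayReference);

    // An explicit status transition, rather than a generic Save(payment)
    // that has to infer intent from whichever fields changed.
    [OperationContract]
    void MarkPaymentSettled(int paymentId, DateTime settledOn);
}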

Related

Invoking WCF IServiceBehavior and IOperationBehavior without calling a method

I'd like to be able to add a couple of behaviors without having to call a method to force them to be used. A typical example is an [InvokeErrorSupport] attribute whose purpose would be to fire off a test e-mail when deploying a service, to ensure the error e-mails are coming through okay. That saves putting magic strings in a request Parameter Object, adding one or more non-business-logic [OperationContract] methods, and otherwise dirtying up the contract (with single responsibility in mind). We are more than happy to invoke a method in other cases, such as our [Heartbeat] behavior and similar.
I have no problem writing the behaviors. It's a great feature of WCF, but right now it looks like I'm going to have to add some method to the contract, such as Initialize, which I'd call and then lock down after start-up. In this instance and others, the services are often external facing, so we wish to avoid DoS attacks etc.
I've poked about client-side and can't see any way to do it, and to be honest it makes a degree of sense that this functionality doesn't exist.
Can anybody offer any advice?
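One property of WCF behaviors worth noting here: IServiceBehavior.ApplyDispatchBehavior runs when the ServiceHost opens, not when an operation is invoked, so an attribute like the [InvokeErrorSupport] described above could fire its test e-mail at start-up without any contract method. A rough sketch, with a hypothetical SendTestEmail helper:

using System;
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;

// Applied to a service class; its logic runs during ServiceHost.Open,
// so no client call (and no extra contract method) is needed.
[AttributeUsage(AttributeTargets.Class)]
public sealed class InvokeErrorSupportAttribute : Attribute, IServiceBehavior
{
    public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
                                      ServiceHostBase serviceHostBase)
    {
        SendTestEmail(serviceDescription.Name); // fires once at host start-up
    }

    public void AddBindingParameters(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints,
        BindingParameterCollection bindingParameters) { }

    public void Validate(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase) { }

    private static void SendTestEmail(string serviceName)
    {
        /* hypothetical helper: send the deployment test e-mail */
    }
}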

Prevent WCF exposing my whole class?

I've just begun learning WCF, and I'm coming from a total non-web background.
I have built a 3-tier desktop application, which compiles into one exe and runs locally.
Now I want to move the whole business logic layer to a central server, and make the GUI a client application.
As far as I understand, WCF should be my solution, and indeed it helped me achieve what I wanted.
I managed to run remote functions, which is the basis of what I need.
My problem now, is that I don't quite understand the architecture.
For example, one of my services returns a data type (class) from my business logic layer.
This class automatically becomes available to the client through the WCF mechanism.
But the problem is, this class contains some methods which I definitely do not want to expose to the client.
For example, a Save method (which saves to the db).
Furthermore, sometimes I don't even want to allow the client to change all the properties of the class, since the class might be sent back to one of my services.
I do not want to re-validate the class instance in the service.
What should I do? Should I build another, restricted version of the business logic layer to expose to the client? Or is there a way to expose only part of my class to the client, without restricting the server itself?
I know this is a basic question, but honestly I've searched a lot before asking here. My problem is I don't quite know what to search for.
My second question, then: do you have any recommendations for resources that can explain this architecture to me?
Typically, if you want to encapsulate your business layer, you would not want to expose the business objects directly. This is because you now have a de-coupled client and you don't necessarily want to have to update the client every time the business logic/properties change.
This is where Data Transfer Objects (DTO) come into play nicely. Usually, you want to have control over your contract (data and methods) that you expose. Therefore, you would explicitly make other objects (DTOs) that make up the transfer layer. Then, you can safely change your client and server code independently (as long as both still fulfill the contract objects).
This usually requires a little more mapping (before you send or receive on each side) but it is often worth it.
For WCF, your interfaces and classes marked with [ServiceContract] and your classes marked with [DataContract] usually make up this transfer layer.
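As an illustration, a minimal sketch of that transfer layer, with invented type names:

using System.Runtime.Serialization;
using System.ServiceModel;

// The DTO carries only what the client needs; Save() and any other
// behavior stay on the server-side business object.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    // The server loads the business object and maps it to the DTO
    // before returning, so nothing else leaks across the boundary.
    [OperationContract]
    CustomerDto GetCustomer(int id);
}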
In WCF, to expose a method to clients you have to mark it with the OperationContractAttribute. So if you don't want clients to use your Save method, just don't mark it with this attribute.
More info here: http://msdn.microsoft.com/en-us/library/system.servicemodel.servicecontractattribute.aspx
It's pretty much the same for properties, but with a different attribute: DataMemberAttribute. If you don't want the client to see a property, just don't mark it with the [DataMember] attribute.
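In other words, serialization is opt-in. A quick sketch with invented names:

using System.Runtime.Serialization;

[DataContract]
public class Customer
{
    [DataMember] public int Id { get; set; }      // serialized
    [DataMember] public string Name { get; set; } // serialized

    public string InternalNotes { get; set; }     // no [DataMember]: invisible to clients

    public void Save() { /* methods are never part of a data contract */ }
}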
But the problem is, this class contains some methods which I definitely do not want to expose to the client.
Are you able to provide an example of your class and interface code? If so, I'm sure you would be able to get more specific answers.
For example, a Save method (which saves to the db).
One possible approach would be to separate your class into two classes: define the properties in the first class, then use it as the base class of a second class that defines the methods. This would allow you to return only the properties while keeping your code DRY.
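A rough sketch of that split, with invented names:

using System.Runtime.Serialization;

// Properties live in the base class, which is what crosses the wire...
[DataContract]
public class CustomerData
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// ...while the behavior lives in the derived class, which stays on the server.
public class Customer : CustomerData
{
    public void Save() { /* persist to the database */ }
}

Since the serializer rejects derived types it does not know about, the service would declare and return the plain CustomerData (copying from a Customer where needed), which is exactly the restriction you want here.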
Furthermore, sometimes I don't even want to allow the client to change all the properties of the class, since the class might be sent back to one of my services.
I do not want to re-validate the class instance in the service.
While you are able to define logic in the get and set methods for each property, I would highly recommend revalidating any input received between services, simply because any future changes or errors in one service could otherwise lead to larger problems across your application. It also helps make your application more secure against potential attacks.
Should I build another, restricted version of the business logic layer to expose to the client? Or is there a way to expose only part of my class to the client, without restricting the server itself?
I agree with the above answers that you should be able to limit access to the different properties and methods using the data and method attributes within your interfaces.
My second question, then: do you have any recommendations for resources that can explain this architecture to me...?
If you are looking for inexpensive but highly valuable video-based training, I've found the courses that Pluralsight offers to be quite good for both architecture and WCF services (btw, I am not associated with them, I just enjoyed their training).

Questioning the use of DTOs with a RESTful service and extracting behavior from update

In the realm of DDD I like the idea of avoiding getters and setters to fully encapsulate a component, so the only interaction allowed is through the behavior that has been built in. Combining this with Event Sourcing, I get a nice history of what was done to a component and when.
One thing I have been thinking about is when I want to create, for example, a RESTful gateway to the underlying service. For the purposes of example, let's say I have a Task object with the following methods:
ChangeDueDate(DateTime date)
ChangeDescription(string description)
AddTags(params string[] tags)
Complete()
Now obviously I will have instance variables inside this object for controlling state, and events that will be fired when the relevant methods are invoked.
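For concreteness, such an encapsulated Task might look roughly like this (a sketch; the event type and member names are invented):

using System;
using System.Collections.Generic;

// No public setters: every state change goes through a behavior method
// that records an event for the event-sourcing stream.
public class TaskAggregate
{
    private DateTime dueDate;
    private readonly List<object> uncommittedEvents = new List<object>();

    public IReadOnlyList<object> UncommittedEvents
    {
        get { return uncommittedEvents; }
    }

    public void ChangeDueDate(DateTime date)
    {
        dueDate = date;
        uncommittedEvents.Add(new DueDateChanged(date));
    }
}

public class DueDateChanged
{
    public DueDateChanged(DateTime newDate) { NewDate = newDate; }
    public DateTime NewDate { get; private set; }
}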
Going back to the REST Service, the way I see it there are 3 options:
Make RPC-style URLs, e.g. http://127.0.0.1/api/tasks/{taskid}/changeduedate
Allow for many commands to be sent to a single endpoint e.g.:
URL: http://127.0.0.1/api/tasks/{taskid}/commands
This will accept a list of commands so I could send the following in the same request:
ChangeDueDate command
ChangeDescription command
Make a truly RESTful verb available, and create domain logic to extract changes from a DTO and in turn translate them into the relevant events required, e.g.:
URL: http://127.0.0.1/api/tasks/{taskid}
I would use the PUT verb to send a DTO representation of a task
Once received, I might give the DTO to the actual Task domain object through a method called, say, UpdateStateFromDto
This would then analyse the DTO, compare the matching properties to its fields, and fire the relevant event whenever it finds a difference in a particular property.
Looking at this now, I feel that the second option is the best, but I am wondering what other people's thoughts are, and whether there is a known, truly RESTful way of dealing with this kind of problem. I know that the second option would be a really nice experience from a TDD point of view, and also from a performance point of view, as I could combine behavior changes into a single request while still tracking change.
The first option would definitely be explicit but would result in more than one request if many behaviors needed to be invoked.
The third option does not sound bad to me, but I realise it would require some thought to come up with a clean implementation that could account for different property types, nesting, etc.
Thanks for your help with this; I'm really bending my head through analysis paralysis. I would just like some advice on which of the options others think is best, or whether I am missing a trick.
I would say option 1. If you want your service to be RESTful, then option 2 is not an option; you'd be tunneling requests.
POST /api/tasks/{taskid}/changeduedate is easy to implement, but you can also do PUT /api/tasks/{taskid}/duedate.
You can create controller resources if you want to group several procedures into one, e.g. POST /api/tasks/{taskid}/doThisAndThat, I would do that based on client usage patterns.
Do you really need to provide the ability to call any number of "behaviors" in one request? (does order matter?)
If you want to go with option 3, I would use PATCH /api/tasks/{taskid}; that way the client doesn't need to include all members in the request, only the ones that need to change.
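As a sketch of how both spellings from option 1 might look, assuming ASP.NET Web API 2 attribute routing (controller, route, and parameter names are invented):

using System;
using System.Web.Http;

[RoutePrefix("api/tasks")]
public class TasksController : ApiController
{
    // RPC-flavoured: POST /api/tasks/42/changeduedate
    [HttpPost, Route("{taskId:int}/changeduedate")]
    public IHttpActionResult ChangeDueDate(int taskId, [FromBody] DateTime dueDate)
    {
        // load the Task aggregate, call task.ChangeDueDate(dueDate), save...
        return Ok();
    }

    // The more resource-oriented variant: PUT /api/tasks/42/duedate
    [HttpPut, Route("{taskId:int}/duedate")]
    public IHttpActionResult PutDueDate(int taskId, [FromBody] DateTime dueDate)
    {
        return Ok();
    }
}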
Let's define a term: an operation is a command or query from a domain perspective; for example, ChangeTaskDueDate(int taskId, DateTime date) is an operation.
With REST you map operations to resource and method pairs, so calling an operation means applying a method to a resource. The resources are identified by URIs and are described by nouns, like task or date; the methods are defined in the HTTP standard and are verbs, like GET, POST, PUT. The URI structure does not really mean anything to a REST client, since the client is concerned with machine-readable descriptions, but for developers it makes it easier to implement the router and link generation, and you can use it to verify that you bound URIs to resources, and not to operations the way RPC does.
So for our current example, ChangeTaskDueDate(int taskId, DateTime date), the verb is change and the nouns are task and due-date. So you can use the following solutions:
PUT /api{/tasks,id}/due-date "2014-12-20 00:00:00" or you can use
PATCH /api{/tasks,id} {"dueDate": "2014-12-20 00:00:00"}.
The difference is that PATCH is for partial updates, and it is not necessarily idempotent.
Now this was a very easy example, because it is plain CRUD. For non-CRUD operations you have to find the proper verb and probably define a new resource. This is why you can map resources to entities only for CRUD operations.
Going back to the REST Service, the way I see it there are 3 options:
Make RPC-style URLs, e.g. http://example.com/api/tasks/{taskid}/changeduedate
Allow for many commands to be sent to a single endpoint e.g.:
URL: http://example.com/api/tasks/{taskid}/commands
This will accept a list of commands so I could send the following in the same request:
ChangeDueDate command
ChangeDescription command
Make a truly RESTful verb available and create domain logic to extract changes from a DTO and in turn translate them into the relevant events required, e.g.:
URL: http://example.com/api/tasks/{taskid}
I would use the PUT verb to send a DTO representation of a task
Once received I may give the DTO to the actual Task Domain Object through a method maybe called, UpdateStateFromDto
This would then analyse the DTO, compare the matching properties to its fields, and fire the relevant event whenever it finds a difference in a particular property.
The URI structure does not mean anything. We can talk about semantics, but REST is very different from RPC. It has some very specific constraints, which you have to read about before doing anything.
This has the same problem as your first answer. You have to map operations to HTTP methods and URIs. They cannot travel in the message body.
This is a good beginning, but you don't want to apply REST operations on your entities directly. You need an interface to decouple the domain logic from the REST service. That interface can consist of commands and queries. So REST requests can be transformed into those commands and queries which can be handled by the domain logic.
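A minimal sketch of such a command-and-query interface (all names invented):

using System;

// Commands capture intent; the REST layer translates requests into
// commands, and the domain handles them without knowing about HTTP.
public interface ICommand { }

public sealed class ChangeTaskDueDate : ICommand
{
    public ChangeTaskDueDate(int taskId, DateTime dueDate)
    {
        TaskId = taskId;
        DueDate = dueDate;
    }

    public int TaskId { get; private set; }
    public DateTime DueDate { get; private set; }
}

public interface ICommandHandler<TCommand> where TCommand : ICommand
{
    void Handle(TCommand command);
}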

Resolving a call-chain anti-pattern

I've begun to notice something of an anti-pattern in my ASP.NET development. It bothers me because it feels like the right thing to do to maintain good design, but at the same time it smells wrong.
The problem is this: we have a multi-layered application whose bottom layer is a class handling calls to a service that provides us with data. Above that is a layer of classes that possibly transform, manipulate, and check the data. Above that are the ASP.NET pages.
In many cases, the methods from the service layer don't need any changes before the data goes to the view, so the model is just a straight pass-through, like:
public List<IData> GetData(int id, string filter, bool check)
{
    return DataService.GetData(id, filter, check);
}
It's not wrong, nor necessarily awful to work on, but it creates an odd kind of copy/paste dependency. I'm also working on the underlying service, which replicates this pattern a lot, and there are interfaces throughout. So what happens is: "I need to add int someotherID to GetData," so I add it to the model, the service caller, the service itself, and the interfaces. It doesn't help that GetData actually stands for several methods that all use the same signature but return different information. The interfaces help a bit with that repetition, but it still crops up here and there.
Is there a name for this anti-pattern? Is there a fix, or is a major change to the architecture the only real way out? It sounds like I need to flatten my object model, but sometimes the data layer is doing transformations, so it has value. I also like keeping my code separated between "calls an outside service" and "supplies page data."
I would suggest you use the query object pattern to resolve this. Basically, your service could have a signature like:
IEnumerable<IData> GetData(IQuery<IData> query);
Inside the IQuery interface, you could have a method that takes a unit of work as input (for example a transaction context, or something like ISession if you are using an ORM such as NHibernate) and returns a list of IData objects.
public interface IQuery<T>
{
    IEnumerable<T> DoQuery(IUnitOfWork unitOfWork);
}
This way, you can create strongly typed query objects that match your requirements, and have a clean interface for your services. This article from Ayende makes good reading about the subject.
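To illustrate, a concrete query object might look like the sketch below; IUnitOfWork is not defined in the answer, so its single member here is an assumption, and the IData shape is invented:

using System.Collections.Generic;
using System.Linq;

public interface IData { int Id { get; } }

// Assumed shape: the unit of work can hand out an IQueryable for a type
// (e.g. wrapping an NHibernate ISession).
public interface IUnitOfWork
{
    IQueryable<T> Query<T>();
}

public interface IQuery<T>
{
    IEnumerable<T> DoQuery(IUnitOfWork unitOfWork);
}

// The criteria travel inside the query object, so adding a parameter no
// longer ripples through every layer's method signature.
public class DataByIdQuery : IQuery<IData>
{
    private readonly int id;

    public DataByIdQuery(int id) { this.id = id; }

    public IEnumerable<IData> DoQuery(IUnitOfWork unitOfWork)
    {
        return unitOfWork.Query<IData>().Where(d => d.Id == id).ToList();
    }
}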
Sounds to me like you need another interface, so that the method becomes something like:
public List<IData> GetData(IDataRequest request)
You're delegating to another layer, and it's not necessarily a bad thing at all.
You could add some other logic here, or in another method down the line, that belongs only in this layer, or swap out the layer being delegated to for another implementation, so it certainly could be a perfectly good use of the layers in question.
You may have too many layers, but I wouldn't say so just from seeing this; more from not seeing anything else.
From what you've described it simply sounds like you have encountered one of the 'trade-offs' of abstraction in your application.
Consider the case where those 'call-chains' no longer 'pass through' the data but require some transformation. It might not be needed now, and certainly the case can be made for YAGNI.
However, in this case it doesn't seem like too much tech debt to handle, and it has the positive side effect of letting you easily introduce changes to the data between layers.
I use this pattern as well; however, I use it to de-couple my domain model objects from my data objects.
In my case, instead of "passing through" the object coming from the data layer as you do in your example, I "map" it to another object that lives in my domain layer. I use AutoMapper to take out the pain of manually doing it.
In most cases my domain object looks exactly the same as the data object it originated from. However, there are times when I need to flatten information coming from my data object, or I may not be interested in everything in it, etc. In those cases I map the data object to a customized domain object that only holds the fields my domain layer is interested in.
This also has the side effect that when I decide to refactor or change my data layer for something else, it does not have to affect my domain objects, since they are de-coupled through the mapping.
Here is a description of auto-mapper, which is sort of what this design pattern tries to achieve I think:
AutoMapper is geared towards model projection scenarios to flatten complex object models to DTOs and other simple objects, whose design is better suited for serialization, communication, messaging, or simply an anti-corruption layer between the domain and application layer
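As a small sketch of the mapping step (the two classes are invented; the API shown is AutoMapper's MapperConfiguration style, and in real code the configuration would be created once and reused):

using AutoMapper;

// Hypothetical pair: the data-layer record and the domain object it maps to.
public class CustomerRecord { public int Id { get; set; } public string Name { get; set; } }
public class Customer       { public int Id { get; set; } public string Name { get; set; } }

public static class MappingExample
{
    public static Customer ToDomain(CustomerRecord record)
    {
        // With matching property names, AutoMapper needs only the map registration.
        var config = new MapperConfiguration(cfg => cfg.CreateMap<CustomerRecord, Customer>());
        var mapper = config.CreateMapper();
        return mapper.Map<Customer>(record);
    }
}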
Actually, the way you have chosen to go is the reason you have what you have (I am not saying it is bad).
First, let me say your approach is quite normal.
Now, let me go through your layers:
Your service provides a somewhat strongly-typed access model. That means it takes specific types of arguments, uses them in specific methods, and returns specific types of results.
Your service-access layer provides the same kind of model: it takes specific kinds of arguments for specific kinds of methods, returning specific kinds of results.
etc...
To avoid confusion, here is what I mean by specific kind:
public UserEntity GetUserByID(int userEntityID);
In this example you need to pass exactly an int while calling exactly GetUserByID, and it will return exactly a UserEntity object.
Now another kind of approach:
Remember how SqlDataReader works? Not very strongly-typed, right?
What you are calling for here, in my opinion, is a not-strongly-typed layer that you are missing.
For that to happen, you need to switch from strongly-typed to non-strongly-typed somewhere in your layers.
Example:
public Entity SelectByID(IEntityID id);
public Entity SelectAll();
So, if you had something like this instead of the service-access layer, you could call it with whichever arguments you wanted.
But that is almost creating an ORM of your own, so I would not think this is the best way to go.
It's essential to define what kind of responsibility goes to which layer, and place such logic only in the layer it belongs to.
It's absolutely normal to just pass through if you don't have to add any logic in a particular method. At some point you might need to do so, and the abstraction layer will pay off at that point.
It's even better to have parallel hierarchies, not just passing the underlying layer's objects up, so that each layer uses its own class hierarchy; you can employ something like AutoMapper if you feel there's not much difference between the hierarchies. This gives you flexibility, and you can always replace automapping with custom mapping code in particular methods/classes if the hierarchies no longer match.
If you have many methods with almost the same signature, then you should think of the Query Specification pattern.
IData GetData(IQuery<IData> query)
Then, in the presentation layer, you can implement a binder for your custom query specification objects: a single ASP.NET handler could create the specific query objects and pass them to a single service method, which passes them to a single repository method, where they can be dispatched according to the specific query class, possibly with a Visitor pattern.
IQuery<IData> BindRequest(IHttpRequest request)
With automapping and the Query Specification pattern together, you can reduce duplication to a minimum.
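Reusing the IQuery<IData> and DataByIdQuery shapes sketched a few answers up, a binder along the lines of that last signature might look like this; the use of HttpRequest.QueryString is just an assumption for the sketch:

using System.Web;

public static class QueryBinder
{
    // One handler turns request parameters into a specific query object,
    // which then flows through the single service and repository methods.
    public static IQuery<IData> BindRequest(HttpRequest request)
    {
        int id = int.Parse(request.QueryString["id"]); // simplest case only
        return new DataByIdQuery(id);
    }
}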

What is the best practice for a repository when you need to do expensive retrieval?

OK, let's say I have a DataRepository class with the methods getNames() and getStates(), and that this data is stored in a web service or database that is expensive to query.
Once the first query has run and returned, consumers asking via these methods get results immediately, as the results are cached in the DataRepository class.
The issue is that for the first call you would want the behavior to be async, to avoid blocking on this expensive call. What is the best way to code this? And does the fact that this DataRepository class does both the actual cross-boundary retrieval and the caching break the Single Responsibility Principle?
Any other thoughts on best practices here?
For the first call you would want the behavior to be async to avoid blocking on this expensive call. What is the best way to code this?
That's a caller's concern. It's best to provide both synchronous and asynchronous interfaces so clients can decide which is appropriate for their situation.
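For example, the repository surface could offer both shapes of the same operation (a sketch, assuming Task-based async; the names are invented):

using System.Collections.Generic;
using System.Threading.Tasks;

public interface IDataRepository
{
    IReadOnlyList<string> GetNames();            // blocks until the data is available
    Task<IReadOnlyList<string>> GetNamesAsync(); // the first, expensive call can be awaited
}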
Does the fact that this DataRepository class does both the actual cross-boundary retrieval and the caching break the Single Responsibility Principle?
Yes, it breaks the SRP if the repository class itself is involved in the retrieval and caching implementation. What's more, the decision about which source to hit usually requires significant logic, which is another good reason to separate these functions into different classes. (With the standard caveat: if YAGNI, then don't do it!)
Is it really the repository's responsibility to know whether it is being called async or not? I would think it would just make its call and return its data; how it is called is not its concern. I also don't think it is its responsibility to store the data: if you want the data stored, the caller (some intermediary, possibly) can store it. The repository should be pretty simple: ask for data, return data. Or even return IQueryable and let the piece that needs the data actually fetch it.
If you have to make the first call async, you may as well make all calls async; this will make your code easier to write and understand, and you won't have to deal with two different call patterns.
The data repository should be responsible for one thing: getting the data to the user. It retrieves the data from a data store, and that data store could be the expensive web service or database call or the cheap call to the cache. The data repository can check for the data in the cache and return it from there, or get it from the WS or DB, put it in the cache, and then return it.
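One way to keep those responsibilities separated is a caching decorator around a plain repository. A sketch with invented names, async-only and GetNames-only for brevity:

using System.Collections.Generic;
using System.Threading.Tasks;

public interface INamesSource
{
    Task<IReadOnlyList<string>> GetNamesAsync();
}

// Talks to the web service / database only; no caching concerns here.
public class RemoteNamesSource : INamesSource
{
    public async Task<IReadOnlyList<string>> GetNamesAsync()
    {
        await Task.Delay(1000); // stand-in for the expensive cross-boundary call
        return new[] { "example" };
    }
}

// Decorator: owns the caching and nothing else, so each class keeps
// a single responsibility.
public class CachedNamesSource : INamesSource
{
    private readonly INamesSource inner;
    private readonly object gate = new object();
    private Task<IReadOnlyList<string>> cached;

    public CachedNamesSource(INamesSource inner) { this.inner = inner; }

    public Task<IReadOnlyList<string>> GetNamesAsync()
    {
        lock (gate)
        {
            // Caching the Task itself means concurrent first callers share
            // one in-flight request instead of each hitting the service.
            return cached ?? (cached = inner.GetNamesAsync());
        }
    }
}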
