I feel a bit embarrassed asking about this. Can't seem to find it described anywhere else though...
Say, we have a webservice method, StoreNewItem(Item item), that takes in a datacontract with all the properties for the item.
We will insert this new item in a database.
Some of the properties are mandatory, and some of these are boolean.
Should we validate the incoming data, i.e. verify that the mandatory fields actually contain valid data, or should this be the responsibility of the client calling the webservice?
If yes, how do we handle the boolean properties? The client may well ignore them, and they will be stored as false in the db, as we have no way of knowing whether they were set to false or just ignored/forgotten by the client.
Is it a valid option to use an enum with True, False and Empty instead of bool as a type for these mandatory properties?
Or is this simply not our problem?
All thoughts are welcome!
Instead of enums, you can use nullable booleans (bool?) which are fully supported by web services.
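A minimal sketch of what that can look like in a WCF data contract; the Item type, its IsActive property and the service names are invented for illustration:

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Item
{
    // A bool? that the client omits entirely deserializes as null,
    // so "not supplied" is distinguishable from an explicit false.
    [DataMember]
    public bool? IsActive { get; set; }
}

[ServiceContract]
public interface IItemService
{
    [OperationContract]
    void StoreNewItem(Item item);
}

public class ItemService : IItemService
{
    public void StoreNewItem(Item item)
    {
        if (item.IsActive == null)
            throw new FaultException("IsActive is mandatory but was not supplied.");
        // ... store the item ...
    }
}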
IMHO your checking logic should at least be in the db, which can forward the error to the service layer (which in turn should raise a fault). I'd have it at the service level too, though, so that the error can be raised before hitting the db (validation is part of the business layer too). Having it in the UI as well is nice, but not mandatory.
Never assume your clients send you valid data.
Definitely validate the data. Malicious entities could easily replicate your clients.
It depends on your business rules.

You could use an optional parameter if you want to allow the caller to omit some parameters but have them receive a default value (note that optional parameters must come after the required ones):

void MyServiceMethod(int somethingElse, bool canDoIt = false)

Or you can make your service accept a nullable value if your business rules allow the caller to omit a parameter by passing null:

void MyServiceMethod(Nullable<bool> canDoIt, int somethingElse)
In general you should always validate the data on the service side and return a service fault data contract in case the validation fails.
More info at:
http://msdn.microsoft.com/en-us/library/ms752208.aspx
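As a rough sketch of that pattern; the ValidationFault contract and its members are invented, adjust them to your own error model:

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ValidationFault   // hypothetical fault data contract
{
    [DataMember]
    public string[] Errors { get; set; }
}

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    [FaultContract(typeof(ValidationFault))]   // advertised in the WSDL
    void MyServiceMethod(bool? canDoIt, int somethingElse);
}

// In the implementation, when validation fails:
// throw new FaultException<ValidationFault>(
//     new ValidationFault { Errors = new[] { "canDoIt must be supplied." } },
//     "Validation failed.");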
If no external third party will be accessing the web service (used only in-house), you can get away with not validating in the service. Personally, I wouldn't do that though; it's too easy to have bad data sent to the service. Plus, then you would have to duplicate all the validation logic across all clients. So validating in the service is a must in my opinion.
As far as booleans, you can use nullable booleans (bool?).
Related
We have a webservice that is used by a lot of other processes.
It takes an object (made from an XSD) as an argument. One of the properties (a datetime) in this object has now been made nullable.
The question is: Do I now have to find all of the processes that reference this webservice and update their reference, in order for them to keep working?
This is a tricky question.
I am thinking you should be fine, because you are not removing or adding parameters on the interface.
It is just a simple change to an existing parameter, and in my opinion you are just relaxing a constraint here. Instead of enforcing that the parameter cannot accept null, you are saying it now can.
I believe existing processes must already be setting a non-null value for that dateTime property? So for new processes to take advantage of the change, they will have to update the reference; otherwise no change is required.
Still, changing a service contract is generally a bad idea. Have you looked at including the change in your release notes? That way your clients are aware and can take the appropriate measures.
Here is another list of breaking changes that might give you trouble:
Remove operations
Change operation name
Remove operation parameters
Add operation parameters
Change an operation parameter name or data type
Change an operation's return value type
Change the serialized XML format for a parameter type (data contract) or operation (message contract) by explicitly using .NET attributes or custom serialization code
Modify service operation encoding formats (RPC Encoding vs. Document Literal)
Changing a service contract, even if it is only making a property nullable, requires the service references to be updated.
Rather than having each project that uses the service create its own reference, you could create a shared project where you maintain a single service reference. That way, you do not need to go through all your projects and applications and repeat this process for each and every one of them.
A better solution still is to have your POCOs defined in a separate project/assembly, and reference that from both the service and the client. WCF and VS are smart enough to identify that they do not have to create proxy classes for the service classes, and will instead use the POCOs from the separate assembly. You wouldn't even have to update the service reference if you change a property in a class that is exposed by the service, only when you add/remove classes or change the service interface.
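A sketch of that setup with invented names; the point is that both sides compile against the same assembly, and "Reuse types in referenced assemblies" stays enabled when adding the service reference:

using System;
using System.Runtime.Serialization;

// Contracts assembly, referenced by both the service and all clients.
namespace MyCompany.Contracts
{
    [DataContract]
    public class Order
    {
        [DataMember]
        public Guid Id { get; set; }

        // Changing this property (e.g. making it nullable) only requires
        // rebuilding against the updated assembly; the service reference
        // itself does not need to be regenerated.
        [DataMember]
        public DateTime? DeliveryDate { get; set; }
    }
}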
Is it possible to have one single endpoint and be able to receive two different objects (but just one, either object1 or object2)?
I don't really mind how it would end up looking in code; all I care about is that the calling user is able to call the same endpoint with either of the object types, and, obviously, that there is a way to know which object I'm getting, either by having two separate methods or, if it is the same one, some way to know which type was sent.
I am not sure how to be more specific, or if there's something else I should mention. Let me know and I'll edit if that is the case.
Can it? Absolutely. Should it? Nope.
REST describes endpoints as having definitive actions based on their inputs. If you were interacting with a Customer endpoint, it wouldn't make very much sense for that endpoint to also consume Dog! There should be a level of abstraction that allows you to consume a type for the specific purpose you intend, even if that purpose is "log the name of this object and the sound it makes" (eg: "Woof" and "Tacos"), perhaps a Recorder endpoint.
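For completeness, the "can" part is usually done in WCF with a shared base contract plus known types. A sketch with invented names (and, per the above, not necessarily a design to imitate):

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
[KnownType(typeof(Customer))]
[KnownType(typeof(Dog))]
public abstract class Recordable { }   // hypothetical base contract

[DataContract]
public class Customer : Recordable
{
    [DataMember]
    public string Name { get; set; }
}

[DataContract]
public class Dog : Recordable
{
    [DataMember]
    public string Sound { get; set; }
}

[ServiceContract]
public interface IRecorder
{
    [OperationContract]
    void Record(Recordable item);   // inspect the runtime type to see what arrived
}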
I have an Interface and in some month I want to add parameters to that interface.
I read somewhere (link went missing) that when I use data contracts I can easily add properties to the data contract. The new properties will just not be sent to the server by old clients.
In theory I then have just one interface, and my new and old clients can both use it. Did I understand that correctly?
But now I am working with the Validation Application Block from Microsoft. Does that break my "feature" of having interfaces which are easy to maintain?

What is a good way of managing different versions of interfaces with the Validation Block?
It isn't really clear whether you mean changes to methods on service contracts or changes to data in data contracts; however, there is a degree of non-breaking-change compatibility in both:
For Service Contracts, from MSDN:
Adding service operations exposed by the service is a nonbreaking change because existing clients need not be concerned about those new operations.
With the proviso:
Adding operations to a duplex callback contract is a breaking change.
Adding new parameters at the end of existing method signatures may work for client calls from old versions, but would result in the default value for the type being passed, e.g. null for reference types, zero for numeric types, etc. This might break things and require additional validation (e.g. DateTime.MinValue wouldn't gel well with a SQL DateTime column).
Similarly, for DataContracts, from MSDN:
In most cases, adding or removing a data member is not a breaking change, unless you require strict schema validity (new instances validating against the old schema).
New data member properties would be defaulted, and obsolete/removed properties would be ignored.
You can also rename members using the Name property on DataMembers.
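To illustrate both points, a sketch of a hypothetical contract after a version bump; the type and member names are invented:

using System.Runtime.Serialization;

[DataContract]
public class TaskItem
{
    [DataMember]
    public string Title { get; set; }

    // Added in v2: old clients never send it, so it must not be required,
    // and it gets a deliberate default during deserialization.
    [DataMember(IsRequired = false)]
    public int Priority { get; set; }

    // Renamed in code, but kept stable on the wire via Name.
    [DataMember(Name = "Description")]
    public string Summary { get; set; }

    [OnDeserializing]
    private void SetDefaults(StreamingContext context)
    {
        Priority = 1;   // used when an old client omits the new field
    }
}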
VAB would be subject to the same rules - i.e. any validations on new fields would need to be aware of the defaults provided, which would imply you couldn't validate new fields.
Making changes like this retroactively is not a good idea once you have clients connecting to your services. It pays to design an interface right the first time, and then to have a versioning strategy going forward, where you provide a facade for older clients to connect to the old interface, which then transforms the old format to the new one and makes deliberate mapping and defaulting decisions about missing or obsolete data.
I'm here to hear your thoughts on the approach we have taken to validation so far. We're still early in the development process so we can still change it. Validation is very important for this application and our clients, so we need to find the most optimal way. Let me describe what we have done so far...
We're building this application that is going to be consumed by different clients. We do not control all the clients and so there are strict requirements to validation in all layers. We do control some of the client applications, one being a WPF application used by ~100 users. From this application, the workflow is as follows:
Client: ViewModel -> ClientRepository -> ServiceClient
Backend service: Service (WCF) -> ApplicationService -> DomainModel -> Repository -> Database
We see the following as candidates for performing validation.
Client: ViewModel validation, for supporting the UI with required fields, lengths, etc.
Backend: Service request DTO validation, because we can't rely on the clients to always supply 100% valid values.
Backend: Domain model entity validation. We do not want our entities to ever end up in a invalid state, and therefore each entity will contain different checks when operations are performed.
Backend: Database validation, such as failing constraints (FK, uniqueness, lengths, etc.)
The client's ViewModel validation is pretty obvious, and for our own clients, as many errors as possible should be corrected there before reaching the service. We can't speak for other applications consuming our service, though, and the worst should be assumed.
Service request DTOs should be validated primarily for the case of third-party applications and mistakes in our own client. Ensuring that the request is correct can prevent errors popping up later while the request is being processed, making for a more efficient service. Like the ViewModel validation, this comes down to required fields, lengths and formats (e.g. email) of the different properties.
The entities in the domain model should themselves ensure that they always have completely valid attributes/properties. We achieve this as follows, taking the Customer entity as an example:
public class Customer : Entity
{
    private Customer() : base() { }

    public Customer(Guid id, string givenName, string surname)
        : this(id, givenName, null, surname) { }

    public Customer(Guid id, string givenName, string middleName, string surname)
        : base(id)
    {
        if (string.IsNullOrWhiteSpace(givenName))
            throw new ArgumentException(GenericErrors.StringNullOrEmpty, "givenName");
        if (string.IsNullOrWhiteSpace(surname))
            throw new ArgumentException(GenericErrors.StringNullOrEmpty, "surname");

        GivenName = givenName.Trim();
        Surname = surname.Trim();
        if (!string.IsNullOrWhiteSpace(middleName))
            MiddleName = middleName.Trim();
    }

    // Private setters keep the invariants intact.
    public string GivenName { get; private set; }
    public string MiddleName { get; private set; }
    public string Surname { get; private set; }
}
Now while this ensures that the attributes are valid, a CustomerValidator class validates the Customer class as a whole, ensuring that it is in a valid state and not only has valid attributes. The CustomerValidator is implemented using the FluentValidation framework. It is called in the application service before committing the customer object to the database.
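For context, the validator looks roughly like this (the rules shown here are simplified placeholders):

using FluentValidation;

public class CustomerValidator : AbstractValidator<Customer>
{
    public CustomerValidator()
    {
        // Whole-object rules, on top of the per-argument constructor checks.
        RuleFor(c => c.GivenName).NotEmpty().MaximumLength(100);
        RuleFor(c => c.Surname).NotEmpty().MaximumLength(100);
    }
}

// In the application service, before committing:
// var result = new CustomerValidator().Validate(customer);
// if (!result.IsValid)
//     throw new ValidationException(result.Errors);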
What do you think of our approach so far?
What I am a bit concerned about is the use of exceptions being thrown all over the place, e.g. the ArgumentException in the example above, but also InvalidOperationException in the case of a call to some method that is not permitted in the current state of the object.
Hopefully, these exceptions will be thrown very rarely, because the service request DTO is validated, and therefore I'm thinking that it might be okay? For example, when the service request DTO is validated, argument exceptions should never be raised unless there is an error somewhere in the validation. Thus you could say that these argument checks in the domain model act as an extra layer of security. InvalidOperationException, on the other hand, can be raised if the client calls a service method that calls a method on the Customer object that is unavailable in its current state (and thus it should fail).
What do you think? If it all sounds okay, how can I appropriately inform the user through WCF when something fails? Be it an ArgumentException, an InvalidOperationException, or an exception containing a list of errors (thrown by the ApplicationService after validating the customer object using the CustomerValidator class). Should I somehow catch all these exceptions and turn them into some general fault exception thrown by WCF, so the client can react to it and inform the user of what happened?
I'm interested in hearing your thoughts on our approach. We're in the beginning of building this rather large application, and we really want to find a good way of performing validation. There are some really critical parts in our application where the data correctness is very important, so validation is important!
My own opinion is that domain consistency should be handled by the domain itself, so there is no need for a CustomerValidator of sorts.
As for exceptions, you should consider that, ArgumentNullException apart, they should be terms of the ubiquitous language (for a deeper explanation see http://epic.tesio.it/2013/03/04/exceptions-are-terms-ot-the-ubiquitous-language.html).
BTW, even if all your DTOs have been validated beforehand, you should never remove the proper validation from the domain. Business invariants are the domain's own responsibility.
As for performance: exceptions have a computational cost, but in most DDD scenarios that I have seen so far, they are not a problem. In particular, they are not a problem when the commands come from human beings.
edit
Validation is always the responsibility of the domain. Take an ISIN value object: it's up to its constructor to ensure its own invariants by throwing proper exceptions. In a well-coded domain, you can't hold an instance of an invalid object. Thus you don't need any validator to accumulate errors.
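A minimal sketch of such a value object; the ISIN check is simplified here, a real implementation would also verify the check digit:

using System;

public sealed class Isin   // a value object: an invalid instance cannot exist
{
    public string Value { get; private set; }

    public Isin(string value)
    {
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentNullException("value");
        if (value.Length != 12)
            throw new ArgumentException("An ISIN must be exactly 12 characters.", "value");

        Value = value.ToUpperInvariant();
    }
}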
In the same way, factories can ensure business invariants if and only if they are the only way to obtain an instance. Technological invariants, such as a db column length, should be kept out of the domain, so a factory could be a good location for them. This would also have the advantage of enabling exception chaining: SqlExceptions are not very expressive for clients.
With expressive exceptions clients just have to try/catch the exceptions they can handle (and remember that presenting an exception to the user is a way to handle it).
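Tying this back to the WCF side of the question, one possible sketch of translating domain exceptions into faults at the service boundary; all names here are invented:

using System;
using System.ServiceModel;

public class RegisterCustomerRequest { /* DTO members elided */ }

public class CustomerApplicationService
{
    public Guid RegisterCustomer(RegisterCustomerRequest request)
    {
        // Domain logic elided; it may throw ArgumentException or
        // InvalidOperationException as discussed above.
        return Guid.NewGuid();
    }
}

public class CustomerService   // hypothetical WCF service implementation
{
    private readonly CustomerApplicationService applicationService = new CustomerApplicationService();

    public Guid RegisterCustomer(RegisterCustomerRequest request)
    {
        try
        {
            return applicationService.RegisterCustomer(request);
        }
        catch (ArgumentException ex)
        {
            // Invalid data: better still, a typed fault declared via [FaultContract].
            throw new FaultException(ex.Message);
        }
        catch (InvalidOperationException ex)
        {
            // Illegal state transition: surfaced as an expressive fault as well.
            throw new FaultException(ex.Message);
        }
    }
}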
There is a point at which data comes into your system. That may be in the form of action arguments (view model) or parameters in a service layer.
You must always hyper-validate at this point (make everything nullable, then disallow nulls; check for negative numbers on integers, etc.) and guarantee that everything is 100% correct at these entry points. Then the rest of your system doesn't have to worry about it, unless the validation is for a particular edge case.
Client validation is nice, but you must never rely on it completely. There can be a disconnect in the validation at times. Also, there's no promise that the clients calling your actions are the clients you think they are (e.g. we've all changed a query parameter in a URL to see what happens).
My problem with the code that you posted is that, at that point, there is data in your domain that may or may not be valid. If you always perform validation at the external bounds of your process, you never have to worry. Also, you never end up wondering, "Where did I put that validation?"
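A sketch of that "make everything nullable, then disallow nulls" idea at a service entry point; the DTO and its members are invented:

using System;

public class CreateItemRequest   // hypothetical request DTO
{
    public string Name { get; set; }
    public int? Quantity { get; set; }   // nullable so "missing" is representable
}

public class ItemEntryPoint
{
    public void CreateItem(CreateItemRequest request)
    {
        if (request == null)
            throw new ArgumentNullException("request");
        if (string.IsNullOrWhiteSpace(request.Name))
            throw new ArgumentException("Name is required.", "request");
        if (request.Quantity == null || request.Quantity < 0)
            throw new ArgumentException("Quantity must be a non-negative number.", "request");

        // Past this point, the rest of the system can trust the data.
    }
}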
In the realm of DDD I like the idea of avoiding getters and setters to fully encapsulate a component, so that the only interaction allowed is the interaction built in as behavior. Combining this with Event Sourcing, I can get a nice history of what has been actioned on a component and when.
One thing I have been thinking about is when I want to create, for example, a RESTful gateway to the underlying service. For the purposes of the example, let's say I have a Task object with the following methods:
ChangeDueDate(DateTime date)
ChangeDescription(string description)
AddTags(params string[] tags)
Complete()
Now, obviously, I will have instance variables inside this object for controlling state, and events which will be fired when the relevant methods are invoked.
Going back to the REST Service, the way I see it there are 3 options:
Make RPC style urls e.g. http://127.0.0.1/api/tasks/{taskid}/changeduedate
Allow for many commands to be sent to a single endpoint e.g.:
URL: http://127.0.0.1/api/tasks/{taskid}/commands
This will accept a list of commands so I could send the following in the same request:
ChangeDueDate command
ChangeDescription command
Make a truly RESTful verb available, and create domain logic to extract changes from a DTO and in turn translate them into the relevant events required, e.g.:
URL: http://127.0.0.1/api/tasks/{taskid}
I would use the PUT verb to send a DTO representation of a task
Once received I may give the DTO to the actual Task Domain Object through a method maybe called, UpdateStateFromDto
This would then analyse the DTO, compare the matching properties to its fields to find differences, and fire the relevant event when it finds a difference in a particular property.
Looking at this now, I feel that the second option looks to be the best, but I am wondering what other people's thoughts are, and whether there is a known, truly RESTful way of dealing with this kind of problem. I know that the second option would be a really nice experience from a TDD point of view, and also from a performance point of view, as I could combine changes in behavior into a single request whilst still tracking each change.
The first option would definitely be explicit, but would result in more than one request if many behaviors needed to be invoked.
The third option does not sound bad to me, but I realise it would require some thought to come up with a clean implementation that could account for different property types, nesting, etc.
Thanks for your help with this; I'm really bending my head through analysis paralysis. I would just like some advice on which of these options others think would be best, or whether I am missing a trick.
I would say option 1. If you want your service to be RESTful then option 2 is not an option, you'd be tunneling requests.
POST /api/tasks/{taskid}/changeduedate is easy to implement, but you can also do PUT /api/tasks/{taskid}/duedate.
You can create controller resources if you want to group several procedures into one, e.g. POST /api/tasks/{taskid}/doThisAndThat, I would do that based on client usage patterns.
Do you really need to provide the ability to call any number of "behaviors" in one request? (does order matter?)
If you want to go with option 3 I would use PATCH /api/tasks/{taskid}, that way the client doesn't need to include all members in the request, only the ones that need to change.
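For illustration, a sketch of both URL styles using ASP.NET Web API 2 attribute routing; the controller and parameter names are my own invention:

using System;
using System.Web.Http;

public class TasksController : ApiController
{
    // RPC-ish, but explicit about the behaviour being invoked:
    [HttpPost]
    [Route("api/tasks/{taskId}/changeduedate")]
    public IHttpActionResult ChangeDueDate(Guid taskId, [FromBody] DateTime dueDate)
    {
        // load the aggregate, call task.ChangeDueDate(dueDate), save, respond
        return Ok();
    }

    // The noun-based alternative mentioned above:
    [HttpPut]
    [Route("api/tasks/{taskId}/duedate")]
    public IHttpActionResult PutDueDate(Guid taskId, [FromBody] DateTime dueDate)
    {
        return Ok();
    }
}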
Let's define a term: operation = command or query from a domain perspective, for example ChangeTaskDueDate(int taskId, DateTime date) is an operation.
With REST you map operations to resource and method pairs, so calling an operation means applying a method to a resource. The resources are identified by URIs and described by nouns, like task or date, while the methods are defined by the HTTP standard and are verbs, like GET, POST, PUT, etc. The URI structure does not really mean anything to a REST client, since the client is concerned with machine-readable stuff, but for developers it makes it easier to implement the router and the link generation, and you can use it to verify that you bound URIs to resources and not to operations, as RPC does.
So with our current example, ChangeTaskDueDate(int taskId, DateTime date), the verb is change and the nouns are task and due-date. You can therefore use the following solutions:
PUT /api{/tasks,id}/due-date "2014-12-20 00:00:00"

or

PATCH /api{/tasks,id} {"dueDate": "2014-12-20 00:00:00"}

The difference is that PATCH is for partial updates and it is not necessarily idempotent.
Now, this was a very easy example, because it is plain CRUD. With non-CRUD operations you have to find the proper verb and probably define a new resource. This is why you can map resources to entities only for CRUD operations.
Going back to the REST Service, the way I see it there are 3 options:
Make RPC style urls e.g. http://example.com/api/tasks/{taskid}/changeduedate
Allow for many commands to be sent to a single endpoint e.g.:
URL: http://example.com/api/tasks/{taskid}/commands
This will accept a list of commands so I could send the following in the same request:
ChangeDueDate command
ChangeDescription command
Make a truly RESTful verb available, and create domain logic to extract changes from a DTO and in turn translate them into the relevant events required, e.g.:
URL: http://example.com/api/tasks/{taskid}
I would use the PUT verb to send a DTO representation of a task
Once received I may give the DTO to the actual Task Domain Object through a method maybe called, UpdateStateFromDto
This would then analyse the DTO, compare the matching properties to its fields to find differences, and fire the relevant event when it finds a difference in a particular property.
The URI structure does not mean anything. We can talk about semantics, but REST is very different from RPC. It has some very specific constraints, which you have to read up on before doing anything.
This has the same problem as your first option: you have to map operations to HTTP methods and URIs. They cannot travel in the message body.
This is a good beginning, but you don't want to apply REST operations to your entities directly. You need an interface to decouple the domain logic from the REST service. That interface can consist of commands and queries, so REST requests can be transformed into commands and queries that the domain logic can handle.
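A sketch of such an interface using the Task example; the command, handler and repository shapes are invented, only ChangeDueDate comes from the question:

using System;

// The question's Task entity, stubbed to the one behaviour used here.
public class Task
{
    public void ChangeDueDate(DateTime date) { /* mutate state, raise event */ }
}

public interface ITaskRepository
{
    Task Get(Guid id);
    void Save(Task task);
}

// A command: plain data describing one operation.
public class ChangeTaskDueDate
{
    public Guid TaskId { get; set; }
    public DateTime DueDate { get; set; }
}

public class ChangeTaskDueDateHandler
{
    private readonly ITaskRepository repository;

    public ChangeTaskDueDateHandler(ITaskRepository repository)
    {
        this.repository = repository;
    }

    public void Handle(ChangeTaskDueDate command)
    {
        var task = repository.Get(command.TaskId);
        task.ChangeDueDate(command.DueDate);   // domain behaviour stays encapsulated
        repository.Save(task);
    }
}

// The REST layer only builds the command from the request body and
// dispatches it; it never manipulates the entity directly.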