I have an interface, and in a few months I want to add parameters to it.
I read somewhere (link went missing) that when I use data contracts I can easily add properties to the data contract. Old clients will simply not send the new properties to the server.
In theory I then have just one interface, and both my new and old clients can use it. Did I understand that correctly?
But now I am working with the Validation Application Block from Microsoft. Does that break my "feature" of having interfaces which are easy to maintain?
What is a good way of managing different versions of interfaces with the Validation Block?
It isn't really clear whether you mean changes to methods on ServiceContracts or changes to data in DataContracts; however, there is a degree of non-breaking-change compatibility in both:
For Service Contracts, from MSDN:
Adding service operations exposed by the service is a nonbreaking change because existing clients need not be concerned about those new operations.
With the proviso:
Adding operations to a duplex callback contract is a breaking change.
Adding new parameters at the end of existing method signatures may work for client calls from old versions, but would result in the default value for the type being passed - e.g. null for reference types, zero for numeric types, etc. This might break things and require additional validation (e.g. DateTime.MinValue wouldn't gel well with a SQL DateTime column).
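For instance, a rough sketch (the contract and type names here are invented, not from the question) of defending against that default on the service side:
using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    // v1 signature was: void PlaceOrder(string productCode, int quantity);
    // v2 appends a parameter; calls from v1 clients leave it at the CLR default.
    [OperationContract]
    void PlaceOrder(string productCode, int quantity, DateTime requestedDelivery);
}

public class OrderService : IOrderService
{
    public void PlaceOrder(string productCode, int quantity, DateTime requestedDelivery)
    {
        // A v1 client never sends requestedDelivery, so it arrives as
        // DateTime.MinValue (0001-01-01), which is outside SQL Server's datetime range.
        if (requestedDelivery == DateTime.MinValue)
        {
            requestedDelivery = DateTime.UtcNow; // an explicit, deliberate default
        }
        // ... persist the order ...
    }
}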
Similarly, for DataContracts, from MSDN
In most cases, adding or removing a data member is not a breaking change, unless you require strict schema validity (new instances validating against the old schema).
New data member properties would be defaulted, and obsolete or removed properties would be ignored.
You can also rename members using the Name property on DataMembers.
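By way of illustration, a small hypothetical data contract (all names made up) showing both an added member and a renamed one:
using System.Runtime.Serialization;

[DataContract(Name = "Customer")]
public class Customer
{
    [DataMember]
    public string FirstName { get; set; }

    // Renamed in code, but kept as "Surname" on the wire via the Name property.
    [DataMember(Name = "Surname")]
    public string LastName { get; set; }

    // Added in a later version. Old clients simply never send it, so it
    // deserializes to its default (null); IsRequired = false keeps old data valid.
    [DataMember(IsRequired = false)]
    public string LoyaltyCode { get; set; }
}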
VAB would be subject to the same rules - i.e. any validations on new fields would need to be aware of the defaults provided, which would imply you couldn't validate new fields.
Doing changes like this retroactively is not a good idea once you have clients connecting to your services. It pays to design an interface right the first time, and then to have a versioning strategy going forward, where you provide a facade that older clients can use to connect to the old interface; the facade then transforms the old format into the new one and makes deliberate mapping and defaulting decisions about missing or obsolete data.
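A sketch of what that facade might look like, with entirely invented contract names, just to show the shape of the idea:
using System.ServiceModel;

[ServiceContract]
public interface ICustomerServiceV1
{
    [OperationContract]
    void Register(string name);
}

[ServiceContract]
public interface ICustomerServiceV2
{
    [OperationContract]
    void Register(string name, string loyaltyCode);
}

// Endpoint kept alive for old clients; it transforms the old call into the new shape.
public class CustomerServiceV1Facade : ICustomerServiceV1
{
    private readonly ICustomerServiceV2 inner;

    public CustomerServiceV1Facade(ICustomerServiceV2 inner)
    {
        this.inner = inner;
    }

    public void Register(string name)
    {
        // A deliberate defaulting decision for data the old contract never carried.
        inner.Register(name, loyaltyCode: null);
    }
}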
I have to apply the [Serializable()] attribute to all my classes, but is there any way to make classes serializable globally, instead of applying this attribute individually to every class?
No, there isn't a way of applying this globally - you'd have to visit each type and add the attribute.
However: applying this globally is a really, really bad idea. Knowing exactly what you're serializing, when, and why is really important - whether this is for session-state, primary persistence, cache, or any other use-case. Statements like
I have to apply the [Serializable()] attribute to all my classes
tell me that you are not currently in control of what you are storing.
Additionally, since [Serializable] maps (usually) to BinaryFormatter, it is important to know that there are a lot of ways (when using BinaryFormatter) in which it is possible to accidentally drag unexpected parts of your model into the serialized data. The most notorious of these is "events", but: there are others.
When I see this type of question, what I envisage is that you're using types from your main data model as the thing that you are putting into session-state, but frankly: this is a mistake - and leads to questions like this. Instead, the far more manageable approach is to create a separate model that exists purely for this purpose (sketched after the list below):
it only has the data that you need to have available in session
it is marked [Serializable] if your provider needs that - or whatever other metadata is needed for the sole purpose for which it exists
it does not have any events
it doesn't involve any tooling like ORM contexts, database connections etc
ideally it is immutable (to avoid confusion over what happens if you make changes locally, which can otherwise sometimes behave differently for in-memory vs persisted storage)
just plain simple basic objects - very easy to reason about
can be iterated separately to your main domain objects, so you don't have any unexpected breaks because you changed something innocent-looking in your domain model and it broke the serializer
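A minimal sketch of such a dedicated session model (all names are illustrative):
using System;
using System.Collections.Generic;

[Serializable]
public sealed class CartSessionModel
{
    // Only the data the session needs - no events, no ORM context, no domain logic.
    private readonly int customerId;
    private readonly List<int> productIds;

    public CartSessionModel(int customerId, IEnumerable<int> productIds)
    {
        this.customerId = customerId;
        this.productIds = new List<int>(productIds);
    }

    public int CustomerId { get { return customerId; } }
    public IReadOnlyList<int> ProductIds { get { return productIds; } }
}
Mapping your domain objects into something like this at the session boundary keeps serialization concerns out of the main model entirely.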
We have a webservice that is used by a lot of other processes.
It takes an object (made from an XSD) as an argument. One of the properties (a datetime) in this object is now made nullable.
The question is: Do I now have to find all of the processes that reference this webservice and update their reference, in order for them to keep working?
This is a tricky question.
I am thinking you should be fine, because you are not removing or adding new parameters to the interface.
It is just a simple change to an existing parameter, and in my opinion you are just relaxing a constraint: instead of enforcing that the parameter cannot accept null, you are saying it now can.
I believe existing processes must already be setting a non-null value for that dateTime property. So for new processes to take advantage of the change, they will have to update the reference; otherwise, no change is required.
Still, changing a service contract is generally a bad idea. Have you looked at including the change in your release notes, so that your clients are aware and can take the appropriate measures?
Here is another list of breaking changes that might give you trouble.
Remove operations
Change operation name
Remove operation parameters
Add operation parameters
Change an operation parameter name or data type
Change an operation's return value type
Change the serialized XML format for a parameter type (data contract) or operation (message contract) by explicitly using .NET attributes or custom serialization code
Modify service operation encoding formats (RPC Encoding vs. Document Literal)
Changing a service contract, even if it is only making a non-nullable property nullable, requires the service references to be updated.
Rather than having each project that uses the service create its own reference, you could create a shared project where you maintain a single service reference. That way, you do not need to go through all your projects and applications and repeat this process for each and every one of them.
A better solution still is to have your POCOs defined in a separate project/assembly and reference that from both the service and the client. WCF and VS are smart enough to identify that they don't have to create proxy classes for the service classes, and will instead use the POCOs from the separate assembly. You wouldn't even have to update the service reference if you change a property in a class that is exposed by the service, only when you add/remove classes or change the service interface.
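A rough outline of the shared-assembly approach (the project and type names below are hypothetical):
// Contracts assembly - referenced by BOTH the service project and every client project.
using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ItemDto
{
    [DataMember]
    public string Name { get; set; }

    // The property that became nullable; both sides share this single definition,
    // so there is no generated proxy type to regenerate.
    [DataMember]
    public DateTime? CreatedOn { get; set; }
}

[ServiceContract]
public interface IItemService
{
    [OperationContract]
    void StoreNewItem(ItemDto item);
}
Clients can then create channels directly against the shared interface (e.g. via ChannelFactory<IItemService>) rather than maintaining an "Add Service Reference" proxy.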
I feel a bit embarrassed asking about this. Can't seem to find it described anywhere else though...
Say, we have a webservice method, StoreNewItem(Item item), that takes in a datacontract with all the properties for the item.
We will insert this new item in a database.
Some of the properties are mandatory, and some of these are boolean.
Should we validate the incoming data, i.e. verify that the mandatory fields actually have valid data, or should this be the responsibility of the client calling the webservice?
If yes, how should we handle the boolean properties? The client may well ignore them, and they will be stored as false in the DB, as we have no way of knowing whether they were set to false or just ignored/forgotten by the client.
Is it a valid option to use an enum with True, False and Empty instead of bool as a type for these mandatory properties?
Or is this simply not our problem?
All thoughts are welcome!
Instead of enums, you can use nullable booleans (bool?) which are fully supported by web services.
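For example, a sketch of the Item contract from the question (property names invented) using bool? so that "not supplied" is distinguishable from an explicit false:
using System.Runtime.Serialization;

[DataContract]
public class Item
{
    [DataMember(IsRequired = true)]
    public string Name { get; set; }

    // null  => the client never set it (reject it, or apply a server-side default)
    // false => the client explicitly chose false
    [DataMember]
    public bool? IsActive { get; set; }
}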
IMHO your checking logic should at least be in the DB, which can forward the error to the service layer (which in turn should raise a fault). I'd have it at the service level too, though, so that the error can be raised before hitting the DB (validation is part of the business layer too). Having it in the UI as well is nice, but not mandatory.
Never assume your clients send you valid data.
Definitely validate the data. Malicious entities could easily replicate your clients.
It depends on your business rules.
You could use an optional parameter if you want to allow the user not to pass some parameters but still have them get a default value:
void MyServiceMethod(int somethingElse, bool canDoIt = false)
Or you can make your service take a nullable value if you want to allow the user to skip a parameter by passing null (if your business rules allow that):
void MyServiceMethod(bool? canDoIt, int somethingElse)
In general, you should always validate the data on the service side and return a fault data contract in case the validation fails.
more info at
http://msdn.microsoft.com/en-us/library/ms752208.aspx
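One possible shape for such a fault, purely as a sketch (the fault and service type names are made up, and the Item contract is a stripped-down stand-in):
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Item
{
    [DataMember]
    public string Name { get; set; }
}

[DataContract]
public class ValidationFault
{
    [DataMember]
    public string PropertyName { get; set; }

    [DataMember]
    public string Message { get; set; }
}

[ServiceContract]
public interface IItemService
{
    [OperationContract]
    [FaultContract(typeof(ValidationFault))]
    void StoreNewItem(Item item);
}

// In the service implementation, when validation fails:
// throw new FaultException<ValidationFault>(
//     new ValidationFault { PropertyName = "Name", Message = "Name is required." });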
If no external third party will be accessing the web service (used only in-house), you can get away with not validating in the service. Personally, I wouldn't do that though; it's too easy to have bad data sent to the service. Plus, then you would have to duplicate all the validation logic across all clients. So validating in the service is a must in my opinion.
As far as booleans, you can use nullable booleans (bool?).
In some projects I see that a dummy record needs to be created in the DB in order to keep the business logic going without breaking DB constraints.
So far I have seen its usage in 2 ways:
By adding a field like IsDummy
By adding a field, something like ObjectType, which points to a type: Dummy
Ok, it helps on what needs to be achieved.
But what makes me wary of such solutions is that you have to keep in mind that some dummy records exist in the application and need to be handled in some processes. If you don't, you face problems until you realize their existence, or until someone on the team tells you "Aha! You have forgotten the dummy records. You should also do..."
So the question is:
Is it a good idea to create dummy records to keep business logic as it is without making the Db complain? If yes, what is the best practice to prevent developers from skipping their existence? If not, what do you do to prevent yourself from falling in a situation where you end up with an only option of creating a dummy record?
Thanks!
Using dummy records is inferior to getting the constraints right.
There's often a temptation to use them because using dummy records can seem like the fastest way to deliver a new feature (and maybe sometimes it is), but they are never part of a good design, because they hide differences between your domain logic and data model.
Dummy records are only required when the modeller cannot easily change the Database Definition, which means the definition and/or the data model is pretty bad. One should never end up in a situation where there has to be special code in the app layer to handle special cases in the database. That is a guaranteed maintenance nightmare.
Any good definition or model will allow changes easily, without "affecting existing code".
All business logic [that is defined in the Database] should be implemented using ANSI SQL Constraints, Checks, and Rules. (Of course, lower-level structures are already constrained via Domains/Datatypes, etc., but I would not classify them as "business rules".) I ensure that I don't end up having to implement dummies simply by doing that.
If that cannot be done, then the modeller lacks knowledge and experience. Or higher-level requirements, such as Normalisation, have been broken, and that presents obstacles to implementing the Constraints which depend on them; that also means the modeller failed.
I have never needed to break such Constraints, or add dummy records (and I have worked on an awful lot of databases). I have removed dummy records (and duplicates) when I have reworked databases created by others.
I've never run across having to do this. If you need to do this, there's something wrong with your data structure, and it's going to cause problems further down the line for reporting...
Using Dummies is dumb.
In general you should aim to get your logic right without them. I have seen them used too, but only as an emergency solution. Your description sounds way too much like making it a standard practice. That would cause more problems than it solves.
The only reason I can see for adding "dummy" records is when you have a seriously bad app and database design.
It is most definitely not common practice.
If your business logic depends on a record existing then you need to do one of two things: Either make sure that a CORRECT record is created prior to executing that logic; or, change the logic to take missing information into account.
I think any situation where something isn't very easily distinguishable as "business-logic" is a cause for trying to think of a better way.
The fact that you mention "which points a type: Dummy" leads me to believe you are using some kind of ORM for handling your data access. A very good checkpoint (though not the only) for ORM solutions like NHibernate is that your source code VERY EXPLICITLY describes your data structures driving your application. This not only allows your data access to easily be managed under source control, but it also allows for easier debugging down the line should a problem occur (and let's face it, it's not a matter of IF a problem will occur, but WHEN).
When you introduce some kind of "crutch" like a dummy record, you are ignoring the point of a database. A database is there to enforce rules against your data, in an effort to ELIMINATE the need for this kind of thing. I recommend you take a look at your application logic FIRST, before resorting to this kind of technique. Think about your fellow dev's, or a new hire. What if they need to add a feature and forget your little "dummy record" logic?
You mention yourself in your question feeling apprehension. Go with your gut. Get rid of the dummy records.
I have to go with the common feeling here and argue against dummy records.
What will happen is that a new developer will not know about them and not code to handle them, or delete a table and forget to add in a new dummy record.
I have experienced them in legacy databases and have seen both of the above mentioned happen.
Also, the longer they exist, the harder it is to take them out, and the more code you have to write to take these dummy records into account - code which could probably have been avoided if you had just done the original design without them.
The correct solution would be to update your business logic.
To quote your expanded explanation:
Assume that you have a Package object and you have implemented a business rule that a Package without any content cannot be created. You created some business layer rules and designed your DB with the relevant constraints. But after some years a new feature is requested, and to accomplish it you have to be able to create a package without content. To overcome this, you decide to create dummy content which is not visible in the UI but lets you create an empty package.
So at one time a package without content was invalid, thus the business layer enforced the existence of content in a package object. That makes sense. Now, if the real-world scenario has changed such that there is a VALID reason to create Package objects without content, then it is the business logic layer which needs to be changed.
Almost universally using "dummy" anything anywhere is a bad idea and usually indicates an issue in implementation. In this instance you are using dummy data to allow "compliance" with a business layer which is no longer accurately representing the real world constraints of the business.
If a package without content is not valid, then dummy data to allow "compliance" with the business layer is a foolish hack. In essence, you wrote rules to protect your own system and are now attempting to circumvent your own protection. On the other hand, if a package without content is valid, then the business layer shouldn't be enforcing bogus constraints. In neither instance is dummy data valid.
When designing a class, should logic to maintain valid state be incorporated in the class or outside of it? That is, should properties throw exceptions on invalid states (i.e. value out of range, etc.), or should this validation be performed when the instance of the class is being constructed/modified?
It belongs in the class. Nothing but the class itself (and any helpers it delegates to) should know, or be concerned with, the rules that determine valid or invalid state.
Yes, properties should check for valid/invalid values when being set. That's what they're for.
It should be impossible to put a class into an invalid state, regardless of the code outside it. That should make it clear.
On the other hand, the code outside it is still responsible for using the class correctly, so frequently it will make sense to check twice. The class's methods may throw an ArgumentException if passed something they don't like, and the calling code should ensure that this doesn't happen by having the right logic in place to validate input, etc.
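A small illustration with a made-up class, where the property guards its own state and the caller gets an exception for bad input:
using System;

public class Account
{
    private decimal balance;

    public decimal Balance
    {
        get { return balance; }
        set
        {
            // The class refuses to enter an invalid state, whatever the caller does.
            if (value < 0)
                throw new ArgumentOutOfRangeException(nameof(value), "Balance cannot be negative.");
            balance = value;
        }
    }
}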
There are also more complex cases where there are different "levels" of client involved in a system. An example is an OS - an application runs in "User mode" and ought to be incapable of putting the OS into an invalid state. But a driver runs in "Kernel mode" and is perfectly capable of corrupting the OS state, because it is part of a team that is responsible for implementing the services used by the applications.
This kind of dual-level arrangement can occur in object models; there can be "exterior" clients of the model that only see valid states, and "interior" clients (plug-ins, extensions, add-ons) which have to be able to see what would otherwise be regarded as "invalid" states, because they have a role to play in implementing state transitions. The definition of invalid/valid is different depending on the role being played by the client.
Generally this belongs in the class itself, but to some extent it also has to depend on your definition of 'valid'. For example, consider the System.IO.FileInfo class. Is it valid if it refers to a file that no longer exists? How would it know?
I would agree with @Joel. Typically this would be found in the class. However, I would not have the property accessors implement the validation logic. Rather, I'd recommend a validation method for the persistence layer to call when the object is being persisted. This allows you to localize the validation logic in a single place and make different choices for valid/invalid based on the persistence operation being performed. If, for example, you are planning to delete an object from the database, do you care that some of its properties are invalid? Probably not - as long as the ID and row versions are the same as those in the database, you just go ahead and delete it. Likewise, you may have different rules for inserts and updates, e.g., some fields may be null on insert but required on update.
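One way that might look, with entirely illustrative names and rules:
using System.Collections.Generic;

public enum PersistenceOperation { Insert, Update, Delete }

public class Customer
{
    public int Id { get; set; }
    public string Email { get; set; }

    // Called by the persistence layer just before it executes the operation.
    public IEnumerable<string> Validate(PersistenceOperation operation)
    {
        if (operation == PersistenceOperation.Delete)
            yield break; // only identity/row version matter for a delete

        if (operation == PersistenceOperation.Update && string.IsNullOrEmpty(Email))
            yield return "Email is required on update.";
    }
}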
It depends.
If the validation is simple, and can be checked using only information contained in the class, then most of the time it's worth while to add the state checks to the class.
There are times, however, when it's not really possible or desirable to do so.
A great example is a compiler. Checking the state of abstract syntax trees (ASTs) to make sure a program is valid is usually not done by either property setters or constructors. Instead, the validation is usually done by a tree visitor, or a series of mutually recursive methods in some sort of "semantic analysis class". In either case, however, properties are validated long after their values are set.
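A toy sketch of that idea (the node types are invented): validity is checked by a separate checker walking the tree, long after the node properties were set:
using System.Collections.Generic;

public abstract class Node { }

public class Declaration : Node
{
    public string Name { get; set; }
}

public class VariableRef : Node
{
    public string Name { get; set; }
}

// Semantic analysis happens as a separate pass, not in property setters.
public class SemanticChecker
{
    private readonly HashSet<string> declared = new HashSet<string>();
    public List<string> Errors { get; } = new List<string>();

    public void Check(IEnumerable<Node> program)
    {
        foreach (var node in program)
        {
            if (node is Declaration d)
                declared.Add(d.Name);
            else if (node is VariableRef v && !declared.Contains(v.Name))
                Errors.Add("Use of undeclared variable '" + v.Name + "'.");
        }
    }
}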
Also, with objects used to hold UI state, it's usually a bad idea (from a usability perspective) to throw exceptions when invalid values are set. This is particularly true for apps that use WPF data binding. In that case you want to display some sort of modeless feedback to the customer rather than throwing an exception.
The class really should maintain valid values. It shouldn't matter if these are entered through the constructor or through properties. Both should reject invalid values. If both a constructor parameter and a property require the same validation, you can either use a common private method to validate the value for both the property and the constructor or you can do the validation in the property and use the property inside your constructor when setting the local variables. I would recommend using a common validation method, personally.
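For example, with an illustrative class that funnels both paths through one private check:
using System;

public class Temperature
{
    private double celsius;

    public Temperature(double celsius)
    {
        this.celsius = ValidateCelsius(celsius);
    }

    public double Celsius
    {
        get { return celsius; }
        set { celsius = ValidateCelsius(value); }
    }

    // The single place where the rule lives, used by both the constructor and the setter.
    private static double ValidateCelsius(double value)
    {
        if (value < -273.15)
            throw new ArgumentOutOfRangeException(nameof(value), "Temperature below absolute zero.");
        return value;
    }
}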
Your class should throw an exception if it receives invalid values. All in all, good design can help reduce the chances of this happening.
The valid state of a class is best expressed with the concept of a class invariant. It is a boolean expression which must hold true for objects of that class to be valid.
The Design by Contract approach suggests that you, as a developer of class C, should guarantee that the class invariant holds:
After construction
After a call to a public method
This implies that, since the object is encapsulated (no one can modify it except via calls to public methods), the invariant will also be satisfied on entering any public method, or on entering the destructor (in languages with destructors), if any.
Each public method states preconditions that the caller must satisfy, and postconditions that will be satisfied by the class at the end of every public method. Violating a precondition effectively violates the contract of the class, so that it can still be correct but it doesn't have to behave in any particular way, nor maintain the invariant, if it is called with a precondition violation. A class that fulfills its contract in the absence of caller violations can be said to be correct.
A concept different from correct but complementary to it (and certainly belonging to the multiple factors of software quality) is that of robust. In our context, a robust class will detect when one of its methods is called without fulfilling the method preconditions. In such cases, an assertion violation exception will typically be thrown, so that the caller knows that he blew it.
So, answering your question, both the class and its caller have obligations as part of the class contract. A robust class will detect contract violations and spit. A correct caller will not violate the contract.
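A rough C# rendering of those ideas with a hypothetical class - precondition checks that a robust build keeps switched on, plus an invariant check after the state change:
using System;
using System.Diagnostics;

public class BankAccount
{
    private decimal balance;

    // Class invariant: the balance is never negative.
    private void CheckInvariant()
    {
        Debug.Assert(balance >= 0, "Invariant violated: negative balance");
    }

    public void Withdraw(decimal amount)
    {
        // Preconditions - the caller's side of the contract.
        if (amount <= 0) throw new ArgumentOutOfRangeException(nameof(amount));
        if (amount > balance) throw new InvalidOperationException("Insufficient funds.");

        balance -= amount;

        // Postcondition / invariant - the class's side of the contract.
        CheckInvariant();
    }
}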
Classes belonging to the public interface of a code library should be compiled as robust, while inner classes could be tested as robust but then run in the released product as just correct, without the precondition checks on. This depends on a number of things and was discussed elsewhere.