I want to create an interface or base class (I'm not sure which route to take) for all my business entities. For each business entity I need the following:
Id - primary key of the entity
Type - type of the entity, e.g. User, just a string
Name - name of the entity, e.g. John Doe
Description - short description of the entity, e.g. Senior Programmer
CreatedDate - date the entity was created
ModifiedDate - date the entity was modified
All classes support a single primary key.
Most of my classes have these fields, though in most cases, the primary key would be something like UserId.
One of the reasons I want to create some commonality in my business entities is that I want to implement a search function that returns a list of IEntity (or Entity, if leveraging inheritance) objects.
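For concreteness, here's a rough sketch of the shape I have in mind (the names are just placeholders):

using System;

public interface IEntity
{
    int Id { get; }                // primary key (backed by UserId, OrderId, etc.)
    string Type { get; }           // e.g. "User"
    string Name { get; }           // e.g. "John Doe"
    string Description { get; }    // e.g. "Senior Programmer"
    DateTime CreatedDate { get; }
    DateTime ModifiedDate { get; }
}

// The search function would then return the common shape:
// List<IEntity> Search(string term);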
My questions are ...
Is it more correct to leverage an interface as opposed to a base class?
If I do create this as an interface, should I keep the property names simple, e.g. Id and Name, which would minimize the property implementations I have to write? OR is it better to append "Entity" to each property name so it's easier to work with the business entity, e.g. MyEntity.EntityId versus MyEntity.Id?
I realize this could be considered subjective, but I really need some guidance here, so any ideas that make this less subjective would be much appreciated.
Thanks in advance!
In my opinion...
If your classes are going to share a common implementation of some of their methods, then a base class makes more sense, because you can't put an implementation inside an interface; if you used an interface, you'd end up with the same common implementation duplicated across multiple classes instead of living in a single base class.
I think appending "Entity" to each property is pointless. You already imply that it's an entity property by either the name of the entity object or its underlying type. I say avoid redundancy and keep it simple.
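As a sketch of that argument (names illustrative), the common implementation lives in one place:

using System;

public abstract class EntityBase
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public DateTime CreatedDate { get; set; }
    public DateTime ModifiedDate { get; set; }

    // Implemented once here; an interface would force every class to repeat it.
    public string Type { get { return GetType().Name; } }
}

public class User : EntityBase { }   // User.Type == "User"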
In my opinion, if you want many objects to have this functionality, you should avoid base-class inheritance at all costs. Once you decide to inherit all of the classes in your project from a certain base class, it's hard to go back. Remember, C# only allows single inheritance.
A better solution might be to implement an interface which lets classes expose the properties they have to anyone who might be interested in that data.
Another reason to avoid base-classing is that it's going to be harder to unit-test, if you're interested in that. It's also going to be hard to change custom behaviors without affecting many areas of your application.
In short: have the objects you've clearly identified as needing that interface implement it, and have another manager-type class ask those classes for that information, acting as the adapter or gateway between your modular, single-purpose objects and a database (or something like that).
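A rough sketch of that shape (all names hypothetical): the objects only expose their data, and the manager class is the single adapter in front of the data store:

using System.Collections.Generic;
using System.Linq;

public interface ISearchable
{
    int Id { get; }
    string Name { get; }
}

// Manager-type class: the only place that knows about the data store.
public class SearchManager
{
    private readonly IEnumerable<ISearchable> _items;

    public SearchManager(IEnumerable<ISearchable> items)
    {
        _items = items;
    }

    public List<ISearchable> Find(string term)
    {
        return _items.Where(i => i.Name.Contains(term)).ToList();
    }
}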
Hope I've made myself clear enough.
Consider whether it would be better to keep the business data as isolated classes in your data access layer, and provide a common wrapper in your presentation layer that supplies the common feature set you're thinking about. Maybe your solution isn't complicated enough to warrant a fully tiered architecture (I'm sure quite a few people would disagree), but I feel that making your application tiered is a good approach. This way the data access classes stay separate, avoiding the conundrum altogether at that tier, while the presentation class(es) expose only the functionality you actually need and take on whatever inheritance regime you choose. My reasoning is that framing the problem this way might make it easier to decide.
I have a bunch of classes in our data access layer that talk to database tables (part of an ORM solution). Now, I want my classes to provide some metadata about this for automation and documentation purposes, something like table name, primary key name, description column, etc. Of course, it's also valuable just to be able to distinguish data access classes from the rest.
I'm not sure how to implement this. If I go with static properties, I'm forced to use reflection, it's difficult to check whether a class carries the information at all, and there is the issue that a programmer might include some of the information but not all of it, which would break things.
If I go with interfaces, I need to create an instance of the class to extract the data, and the classes have no uniform constructor for doing that; an interface has no way to require a particular constructor.
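To illustrate the first option (all names here are made up): the metadata lives in static properties and has to be dug out with reflection, with nothing forcing completeness:

using System;

public class UserGateway
{
    public static string TableName { get { return "Users"; } }
    public static string PrimaryKey { get { return "UserId"; } }
    // Nothing forces this class to also define DescriptionColumn, etc.
}

public static class MetadataReader
{
    public static string GetTableName(Type type)
    {
        var prop = type.GetProperty("TableName");   // reflection, stringly-typed
        return prop == null ? null : (string)prop.GetValue(null, null);
    }
}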
Tips?
I have some integrations (like Salesforce) that I would like to hide behind a product-agnostic wrapper (like a CrmService class instead of SalesforceService class).
It seems simple enough that I can just create a CrmService class and use the SalesforceService class as an implementation detail inside it. However, there is one problem: the SalesforceService uses some exceptions and enums of its own. It would be weird if my CrmService threw SalesforceExceptions or required callers to use Salesforce enums.
Any ideas how I can accomplish what I want cleanly?
EDIT: For exceptions, I am currently catching the Salesforce one and throwing my own custom one. I'm not sure what to do about the enums, though. I guess I could map the Salesforce enums to my own provider-agnostic ones, but I'm looking for a general solution that might be cleaner than doing this mapping. If mapping is my only option, that's okay; I'm just trying to get ideas.
The short answer is that you are on the right track, have a read through the Law of Demeter.
The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents), in accordance with the principle of "information hiding".
The advantage of following the Law of Demeter is that the resulting software tends to be more maintainable and adaptable. Since objects are less dependent on the internal structure of other objects, object containers can be changed without reworking their callers.
Although it may also result in having to write many wrapper methods to propagate calls to components; in some cases, this can add noticeable time and space overhead.
So you see, you are following quite a good practice, one which I generally follow myself, but it does take some effort.
And yes, you will have to catch and throw your own exceptions and map enums, requests and responses. It's a lot of upfront effort, but if you ever have to swap out Salesforce in a few years, you will be regarded as a hero.
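A hedged sketch of that translation layer (SalesforceService, SalesforceException, SalesforceOrderStatus, OrderStatus and CrmException are all invented names for illustration):

public class CrmService
{
    private readonly SalesforceService _salesforce = new SalesforceService();

    public OrderStatus GetOrderStatus(string orderId)
    {
        try
        {
            // The provider-specific enum comes back from the Salesforce API.
            SalesforceOrderStatus sfStatus = _salesforce.GetStatus(orderId);
            return MapStatus(sfStatus);
        }
        catch (SalesforceException ex)
        {
            // Wrap the provider exception; keep it as InnerException for diagnostics.
            throw new CrmException("CRM provider error", ex);
        }
    }

    private static OrderStatus MapStatus(SalesforceOrderStatus status)
    {
        switch (status)
        {
            case SalesforceOrderStatus.Activated: return OrderStatus.Active;
            case SalesforceOrderStatus.Draft: return OrderStatus.Pending;
            default: return OrderStatus.Unknown;
        }
    }
}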
As with all things in software development, you need to weigh up the effort versus the benefit you will gain. If you think you are never likely to swap out Salesforce, is it really needed? That's for you to decide.
To make use of good OOP practices, I would create a small interface ICrm with the basic members that all your CRMs have in common. This interface will include the typical methods like MakePayment(), GetPayments(), CheckOrder(), etc. Also create the enums that you need, like OrderStatus or ErrorType, for example.
Then create your specific classes implementing the interface, e.g. class CrmSalesForce : ICrm. Here you can convert the details specific to this particular CRM (Salesforce in this case) to your common ICrm. Enums can be converted to strings and back again if you have to (http://msdn.microsoft.com/en-us/library/kxydatf9(v=vs.110).aspx).
Then, as a last step, create your CrmService class and use Dependency Injection in it (http://msdn.microsoft.com/en-us/library/ff921152.aspx); that is, pass an ICrm as a parameter to its constructor (or to its methods if you prefer). That way you keep your CrmService class cohesive and independent, and you can create and use different CRMs without needing to change most of your code.
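Putting those three steps together, a minimal sketch (member names are illustrative, and the enum conversion follows the linked article):

using System;

public enum OrderStatus { Pending, Shipped, Cancelled }

public interface ICrm
{
    void MakePayment(decimal amount);
    OrderStatus CheckOrder(string orderId);
}

public class CrmSalesForce : ICrm
{
    public void MakePayment(decimal amount)
    {
        // Call the Salesforce API here and translate its types to the common ones.
    }

    public OrderStatus CheckOrder(string orderId)
    {
        string raw = "Shipped"; // stand-in for a value returned by Salesforce
        return (OrderStatus)Enum.Parse(typeof(OrderStatus), raw);
    }
}

// Dependency Injection: CrmService depends only on the abstraction.
public class CrmService
{
    private readonly ICrm _crm;

    public CrmService(ICrm crm) { _crm = crm; }

    public OrderStatus CheckOrder(string orderId) { return _crm.CheckOrder(orderId); }
}

So wiring it up is just: var service = new CrmService(new CrmSalesForce());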
Say I've got a DAL that multiple applications use to access the same data. The DAL defines its own classes and interfaces for dealing with that data, but should the applications using the DAL be working with those classes, or just the interfaces?
Put another way, should it be:
List<Product> products = MyDAL.Repository.GetProducts();
or:
List<IProduct> products = MyDAL.Repository.GetProducts();
Is it good or bad that each application utilizing the DAL will have to create its own implementation details for Product?
Passing interfaces around, instead of classes, is one (very good) thing. Making your DAL classes private is a different (and not necessarily good) thing.
For instance, what if one of the applications that use your DAL wants to change the behavior of Product slightly? How can you subclass or decorate your original class if it's private?
Say one of your apps is a web application that needs to store a product's image as a URL instead of a file path? Or wants to add caching, logging or something else on top of Product?
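For example, neither of these is possible if Product is hidden (a sketch; the members ImageLocation and Name are invented, and Product.ImageLocation is assumed virtual):

// Subclassing to change behavior slightly:
public class WebProduct : Product
{
    // Serve the image as a URL instead of a file path.
    public override string ImageLocation
    {
        get { return "http://images.example.com/" + base.ImageLocation; }
    }
}

// Decorating to add caching on top of the same interface:
public class CachedProduct : IProduct
{
    private readonly IProduct _inner;
    private string _cachedName;

    public CachedProduct(IProduct inner) { _inner = inner; }

    public string Name
    {
        get { return _cachedName ?? (_cachedName = _inner.Name); }
    }
}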
There are way too many open questions here to determine the best way.
If these applications can reuse the additional functionality given by the classes in your DAL, then I'd say absolutely reuse them.
Taking "Product" for example. If the DAL has a definition of Product that is pretty close to or the same as the definition the applications need, then reuse is your best bet.
If the applications explicitly do NOT want the functionality given by the classes and instead want to provide their own implementation, then just use the interfaces.
Again, looking at "Product": if the applications have their own definition of Product with perhaps additional or just plain different properties and methods then they should implement the interface.
It's really a question of how the classes in question are going to be used.
Returning interfaces is better, but then GetProducts() will need to know about the concrete implementations to properly query the data store. You might want to use an IoC framework for that.
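In other words, something along these lines (LoadProductsFromStore is a stand-in for whatever data-access call you actually use):

using System.Collections.Generic;
using System.Linq;

public class Repository
{
    public List<IProduct> GetProducts()
    {
        // The repository still has to pick a concrete class to materialize
        // rows into, even though callers only ever see IProduct.
        List<Product> rows = LoadProductsFromStore();
        return rows.Cast<IProduct>().ToList();
    }

    private List<Product> LoadProductsFromStore()
    {
        return new List<Product>(); // placeholder for the real query
    }
}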
I have a general question regarding naming conventions.
If I separate the data and the operations into two separate classes, one holds the data elements (the entity) and the other manipulates the entity class. What do we usually call the class that manipulates the entity?
(The entity I am referring to has nothing to do with any kind of Entity Framework.)
Manager? Controller? Operator? Manipulator?
Thanks in advance
It depends on what kind of operations you're performing on those data contracts/entities. Here are some of my conventions, using a Fruit entity as the example (I'm not trying to imply these are all static methods; it's just pseudocode):
Repository: provides CRUD operations on a piece of fruit
FruitRepository.Save(Fruit item);
Manager: operations outside of simple CRUD.
InventoryManager.ShipFruit(Fruit[] items, string address);
Controller: reserved for use in the interface, as in Model-View-Controller. Makes interface or flow decisions about how to display or operate on fruit.
FruitController.ShowDetails(string fruitId);
Processor: used for operations that are "batched" together. Often these are long-running or done offline.
FruitProcessor.RemoveSeeds(Fruit[] lotsOfFruit);
Manipulator: provides specific operations on a single entity or a collection of them.
FruitManipulator.PeelFruit(Fruit item);
Provider: provides more generalized or global operations.
FruitProvider.GetAllTypesOfFruit();
FruitProvider.IsInSeason(string fruitName);
Exporter: converts some fruit into a format intended for file storage or perhaps transfer.
FruitExporter.Save(string spreadsheet);
Analyzer: provides results about an individual piece of fruit or a quantity.
FruitAnalyzer.Weigh(Fruit[] items);
Service: exposes functionality in a loosely coupled or remotely accessible kind of way.
Assembler: creates fruit by combining different data sources.
FruitAssembler.Combine(string speciesFile, string quantitiesFile);
Factory: responsible for creating/instantiating fruit.
FruitFactory.CreateApple(); // red delicious, McIntosh, etc
Builder: provides a way to build up fruit from individual parts/properties.
FruitBuilder.AddSeeds(5); FruitBuilder.AddStem();
These are all somewhat loose. The main goal is to stay consistent within your own codebase and avoid conflicts with the technologies you're using, i.e. don't have a lot of Controller classes that aren't controllers if you're doing ASP.NET MVC.
I usually go with Manager.
Call it whatever you are comfortable with, just make sure you use that name consistently throughout your project. The closest thing we have is a Capability or a Receiver but even then these aren't quite what you're talking about.
However.
Do you have a specific reason for separating the data from the methods? Unless you're talking about a class and its factory, I'd be really surprised if this separation is truly warranted.
Let's reason as follows:
If the logic uses only one entity, move it to the entity itself (See rich domain model vs. anemic domain model).
So most of these classes will be the ones implementing logic that deals with more than one entity, and hence represent a collaboration.
Such classes should be named for the collaboration they represent rather than just for their technical responsibility. A technical term such as Manager, Controller or Manipulator can still serve as the naming convention, but the important part is the first part of the name.
Example:
Entities: Product and Customer
Collaboration between the two: PurchaseService <-- what's important is Purchase, not Service
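A sketch of that guideline (everything here is illustrative):

// Logic touching a single entity lives on the entity (rich domain model):
public class Product
{
    public decimal Price { get; set; }

    public decimal PriceWithTax(decimal rate)
    {
        return Price * (1 + rate);
    }
}

public class Customer
{
    public decimal Discount { get; set; }
}

// Logic spanning both entities is a collaboration: name it after Purchase.
public class PurchaseService
{
    public decimal Quote(Customer customer, Product product)
    {
        return product.Price * (1 - customer.Discount);
    }
}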
I separate the data and operations into two separate classes.
Don’t. This flies in the face of object-oriented design.
One advantage that comes to mind: if you use POCO classes for ORM mapping, you can easily switch from one ORM to another, provided both support POCO.
Having an ORM without POCO support (e.g. where mappings are done with attributes, as in the DataObjects.Net ORM) is not an issue for me: even with POCO-capable ORMs and their generated proxy entities, you have to be aware that the entities are actually DAO objects bound to some context/session, so serializing them is a problem, etc.
POCO is all about loose coupling and testability.
So when you are doing POCO you can test your Domain Model (if you're doing DDD, for example) in isolation. You don't have to bother about how it is persisted, and you don't need to stub contexts/sessions to test your domain.
Another advantage is that there are fewer leaky abstractions, because persistence concerns are not pushed into the domain layer; you are enforcing the Single Responsibility Principle.
The third advantage I can see is that with POCO your Domain Model is easier to evolve and more flexible. You can add new features more easily than if it were coupled to the persistence.
I use POCO when I'm doing DDD, for example, but for some kinds of applications you don't need DDD (small data-driven applications, say), so the concerns are not the same.
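To make the first point above concrete, here is a sketch of such a test (NUnit assumed; the Invoice class is invented):

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Plain POCO: no base class, no ORM references anywhere.
public class Invoice
{
    private readonly List<decimal> _lines = new List<decimal>();

    public List<decimal> Lines { get { return _lines; } }

    public decimal Total { get { return _lines.Sum(); } }
}

[TestFixture]
public class InvoiceTests
{
    [Test]
    public void Total_sums_all_lines()
    {
        var invoice = new Invoice(); // no database, no mapping, no session
        invoice.Lines.Add(10m);
        invoice.Lines.Add(5m);
        Assert.AreEqual(15m, invoice.Total);
    }
}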
Hope this helps
None. Period. All the advantages people like throwing around are advantages that don't matter in the grand scheme of things. I would much rather have a strong base class for entity objects that actually holds a lot of integrated code (like raising property change events when properties change) than write all that stuff myself. Note that I DID write a (then commercially available) ORM for .NET before "LINQ" or "ObjectSpaces" even existed. I've used O/R mappers for something like 15 years now, and I have never found a case where POCO was really worth the possible trouble.
That said, attributes MAY be bad for other reasons. I prefer the Fluent NHibernate approach these days, having started my own (now retired) mapper with attributes and then moved to XML-based files.
The "POCO gets me nothing" theme mostly comes from the point that Entities ARE SIMPLY NOT NORMAL OBJECTS. They have a lot of additional functionality as well as limitations (like query speed etc.) that the user should please be aware of anyway. ORM's, despite LINQ, are not replacable anyway - noit if you start using their really interesting higher features. So, at the end you get POCO and still are suck with a base class and different semantics left and right.
I find that most proponents of POCO (as in: "must have", not "would be nice") normally have NOT thought their arguments through to the end. You get all kinds of pretty sloppy reasoning, pretty much on the level of "stored procedures are faster than dynamic SQL" - stuff that simply does not hold true. Things like:
"I want to have them in cases where they do not need saving ot the database" (use a separate object pool, never commit),
"I may want to have my own functionality in a base class (the ORM should allos abstract entity classed without functionality, so put your OWN base class below the one of the ORM)
"I may want to replace the ORM with another one" (so never use any higher functionality, hope the ORM API is compatible and then you STILL may have to rewrite large parts).
In general, POCO people also overlook the huge amount of work it actually takes to get this RIGHT - with things like transactional object updates there is a TON of code in the base class. Some of the .NET interfaces are horrific to implement at a POCO level, though a lot easier if you can tie into the ORM.
Take the post of Thomas Jaskula here:
POCO is all about loose coupling and testability.
That assumes you can test data binding without having it? Testability is mock-framework territory, and there are REALLY powerful mocking frameworks that can even "redirect" method calls.
So when you are doing POCO you can test your Domain Model (if you're doing DDD, for example) in isolation. You don't have to bother about how it is persisted, and you don't need to stub contexts/sessions to test your domain.
Actually not true. Persistence should be part of any domain model test, because the domain model is there to be persisted. You can always test non-persistent scenarios by simply not committing the changes, but a lot of the tests will involve persistence and its failure modes (for example, invoices with invalid or missing data are not valid to be written to disk).
Another advantage is that there are fewer leaky abstractions, because persistence concerns are not pushed into the domain layer; you are enforcing the Single Responsibility Principle.
Actually, no. A proper domain model will never have persistence methods on the entities; that is a crap ORM to start with (user.Save()). On the other hand, the base class will do things like validation (IDataErrorInfo) and handle property update events on persistent fields, and in general save you a ton of time.
As I said before, some of the functionality you SHOULD have is really hard to implement with plain variables as the data store - like the ability to put an entity into update mode, make some changes, then roll them back. Not needed? Tell that to Microsoft, who use exactly that where available in their data grids (you can change some properties, then hit Escape to roll back the changes).
The third advantage I can see is that with POCO your Domain Model is easier to evolve and more flexible. You can add new features more easily than if it were coupled to the persistence.
Non-argument. You cannot play around adding fields to a persisted class without handling the persistence, and you can add non-persistent features (methods) to a non-POCO class just as easily as to a POCO class.
In general, my non-POCO base class did the following:
Handle property updates and IDataErrorInfo - without the user writing a line of code for fields and items the ORM could handle.
Handle object status information (New, Updated etc.). This is IMHO intrinsic information that also is pretty often pushed down to the user interface. Note that this is not a "save" method, but simply an EntityStatus property.
And it contained a number of overridable methods that the entity could use to extend its behavior WITHOUT implementing a (public) interface, so those methods were really private to the entity. It also had some internal properties, such as access to the "object manager" responsible for the entity, which was also the place to ask for other entities (submit queries), which was sometimes needed.
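A stripped-down sketch of that kind of base class (details invented; a real one carries much more):

using System.ComponentModel;

public enum EntityStatus { New, Unchanged, Updated, Deleted }

public abstract class EntityBase : INotifyPropertyChanged
{
    public EntityStatus Status { get; protected set; }

    public event PropertyChangedEventHandler PropertyChanged;

    // Entities call this from their setters; the base class raises the change
    // event and tracks object status, so per-field code stays minimal.
    protected void SetField<T>(ref T field, T value, string propertyName)
    {
        if (Equals(field, value)) return;
        field = value;
        if (Status == EntityStatus.Unchanged) Status = EntityStatus.Updated;
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}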
POCO support in an ORM is all about separation of concerns, following the Single Responsibility Principle. With POCO support, an ORM can talk directly to a domain model without the need to "muddy" the domain with data-access specific code. This ensures the domain model is designed to solve only domain-related problems and not data-access problems.
Aside from this, POCO support can make it easier to test the behaviour of objects in isolation, without the need for a database, mapping information, or even references to the ORM assemblies. The ability to have "stand-alone" objects can make development significantly easier, because the objects are simple to instantiate and easy to predict.
Additionally, because POCO objects are not tied to a data-source, you can treat them the same, regardless of whether they have been loaded from your primary database, an alternative database, a flat file, or any other process. Although this may not seem immediately beneficial, treating your objects the same regardless of source can make behaviour easy to predict and to work with.
I chose NHibernate for my most recent ORM because of the support for POCO objects, something it handles very well. It suits the Domain-Driven Design approach the project follows and has enabled great separation between the database and the domain.
Being able to switch ORM tools is not a real argument for POCO support. Although your classes may not have any direct dependencies on the ORM, their behaviour and shape will be restricted by the ORM tool and the database it is mapping to. Changing your ORM is as significant a change as changing your database provider. There will always be features in one ORM that are not available in another and your domain classes will reflect the availability or absence of features.
In NHibernate, you are required to mark all public or protected class members as virtual to enable support for lazy-loading. This restriction, though not significantly changing my domain layer, has had an impact on its design.
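For example, an NHibernate-mapped POCO ends up looking like this (Order and Customer are illustrative); the virtual modifiers exist purely so NHibernate can generate its lazy-loading proxies:

using System;

public class Order
{
    public virtual int Id { get; protected set; }
    public virtual DateTime PlacedOn { get; set; }
    public virtual Customer Customer { get; set; } // lazy-loaded reference
}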