How/Where to apply business rules to POCO objects? - c#

Let's say I have a POCO with the following:
[DataMember]
public Nullable<int> MetricId
{
    get { return _metricId; }
    set
    {
        if (_metricId != value)
        {
            _metricId = value;
            OnPropertyChanged("MetricId");
        }
    }
}
private Nullable<int> _metricId;
I want to validate that MetricId is strictly greater than 0.
Obviously, if I put this rule as a data annotation in this class, it will be overwritten the next time I regenerate the POCO. Where do I put this logic?
Thanks!

I seem to remember the suggestion being to use partial classes: roll a partial class that holds the logic you don't want to be overwritten.
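A minimal sketch of that approach, assuming the generated class is named Metric and is declared partial (as the EF POCO T4 templates do); the buddy-class name MetricMetadata is made up for illustration, and the attributes come from System.ComponentModel.DataAnnotations:

[MetadataType(typeof(MetricMetadata))]
public partial class Metric   // must match the generated class's name and namespace
{
}

public class MetricMetadata
{
    // Mirrors the generated property; consumers such as ASP.NET MVC validation
    // will enforce MetricId > 0 without touching the generated file.
    [Range(1, int.MaxValue, ErrorMessage = "MetricId must be greater than 0.")]
    public Nullable<int> MetricId { get; set; }
}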

After reading the comments and responses, it seems that creating another class is fine, but by making it partial, it ties my business logic directly to Entity Framework and the generated POCO code. This is worrisome because as EF4 changes into EF5 and the T4 templates change, what will happen to my code? Plus, I just don't feel comfortable using partial classes as if they were normal classes.
Instead, and someone can still provide a better answer (please do), I think creating a framework-independent object (one not tied to EF) is better. Then I can map it to a generic business object. Something like:
static Customer Map(CustomerPOCO poco)
{
    return new Customer
    {
        CustomerId = poco.CustomerId
        ...
        ...
    };
}
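With that approach, the MetricId rule from the original question can live on the hand-written business object itself; a minimal sketch (the class name Metric is illustrative):

public class Metric
{
    private int _metricId;

    public int MetricId
    {
        get { return _metricId; }
        set
        {
            // Business rule: MetricId must be strictly greater than 0.
            if (value <= 0)
                throw new ArgumentOutOfRangeException(
                    "value", "MetricId must be strictly greater than 0.");
            _metricId = value;
        }
    }
}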

Using partial classes isn't clean. Let's say you have an abstract Product class and the derived classes OnlineProduct and StoreProduct. Both inherit a Price property, but the price is different, and the business logic may be different too. Now you have two additional partial classes that you don't really need, and in a larger system this multiplies.

Related

Using interfaces in models with SQLite

Let's say I have an interface like this:
public interface IUser
{
    int Id { get; }
    string Name { get; }
    List<IMonthlyBudget> MonthlyBudget { get; }
}
and then I have a model that implements this:
public class User : IUser
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<IMonthlyBudget> MonthlyBudget { get; set; }
}
and here I have the IMonthlyBudget:
public interface IMonthlyBudget
{
    int Id { get; }
    float MonthlyMax { get; }
    float CurrentSpending { get; }
    float MonthlyIncome { get; }
}
Now I have my models. But the issue comes with using SQLite: SQLite can't know what the real implementation of IMonthlyBudget is. I understand why, but I really don't want to remove the interface and expose the real implementation to all the clients that use these models. In my project structure I have a Core project that has all the model interfaces, and the model implementations are in a data access project.
Is there something wrong with how I'm approaching this problem? I assume I'm not the first one to run into an issue like this. Isn't it completely normal practice to keep model interfaces (which repositories etc. then use as their return types, parameters and so on) and implement the actual concrete models in a data access project?
And can someone explain why I can't do this:
public class User : IUser
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<MonthlyBudget> MonthlyBudget { get; set; }
}
MonthlyBudget implements IMonthlyBudget, so shouldn't it be completely fine to use the concrete model as the type instead of the interface, given that the concrete model actually implements the interface?
A few questions here, so I'll break it down into sections:
Use of Interfaces
It is definitely good practice to interface classes that perform operations. For example, you may have a data service (i.e. data access layer) interface that allows you to do operations to read and modify data in your persistent store. However, you may have several implementations of that data service. One implementation may save to the file system, another to a DBMS, another is a mock for unit testing, etc.
However, in many cases you do not need to interface your model classes. If you're using an anemic business object approach (as opposed to rich business objects), then model classes in general should just be containers for data, or Plain Old CLR Objects (POCO). Meaning these objects don't have any real functionality to speak of and they don't reference any special libraries or classes. The only "functionality" I would put in a POCO is one that is dependent only upon itself. For example, if you have a User object that has a FirstName and LastName property, you could create a read-only property called FullName that returns a concatenation of the two.
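A tiny sketch of that FullName idea (property names as described above):

public class User
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // Depends only on the object's own state, so it still qualifies as a POCO.
    public string FullName
    {
        get { return string.Format("{0} {1}", FirstName, LastName); }
    }
}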
POCOs are agnostic as to how they are populated and therefore can be utilized in any implementation of your data service.
This should be your default direction when using an anemic business object approach, but there is at least one exception I can think of where you may want to interface your models. You may want to support for example a SQLite data service, and a Realm (NoSQL) data service. Realm objects happen to require your models to derive from RealmObject. So, if you wanted to switch your data access layer between SQLite and Realm then you would have to interface your models as you are doing. I'm just using Realm as an example, but this would also hold true if you wanted to utilize your models across other platforms, like creating an observable base class in a UWP app for example.
The key litmus test to determining whether you should create interfaces for your models is to ask yourself this question:
"Will I need to consume these models in various consumers and will those consumers require me to define a specific base class for my models to work properly in those consumers?"
If the answer to this is "yes", then you should make interfaces for your models. If the answer is "no", then creating model interfaces is extraneous work and you can forego it and let your data service implementations deal with the specifics of their underlying data stores.
SQLite Issue
Whether you continue to use model interfaces or not, you should still have a data access implementation for SQLite which knows that it's dealing with SQLite-specific models and then you can do all your CRUD operations directly on those specific implementations of your model. Then since you're referring to a specific model implementation, SQLite should work as usual.
Type Compatibility
To answer your final question: the type system does not see this...
List<IMonthlyBudget> MonthlyBudget
as being type-compatible with this...
List<MonthlyBudget> MonthlyBudget
In our minds it seems like if I have a list of apples, then it should be type-compatible with a list of fruit. The compiler sees an apple as a type of fruit, but not a list of apples as a type of a list of fruit. So you can't cast between them like this...
List<IMonthlyBudget> myMonthlyBudget = (List<IMonthlyBudget>) new List<MonthlyBudget>();
but you CAN add a MonthlyBudget object to a list of IMonthlyBudget objects like this...
List<IMonthlyBudget> myMonthlyBudget = new List<IMonthlyBudget>();
myMonthlyBudget.Add(new MonthlyBudget());
Also you can use the LINQ .Cast() method if you want to cast an entire list at once.
The reason behind this has to do with type variance. There's a good article on it here that can shed some light as to why:
Covariance and Contravariance
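To make the variance point concrete, a short sketch (MonthlyBudget is assumed to implement IMonthlyBudget, as in the question; Cast() and ToList() require System.Linq):

List<MonthlyBudget> concrete = new List<MonthlyBudget> { new MonthlyBudget() };

// List<T> is invariant, so this line would not compile:
// List<IMonthlyBudget> budgets = concrete;

// IEnumerable<out T> is covariant, so a read-only view is allowed:
IEnumerable<IMonthlyBudget> view = concrete;

// Or materialize a new list with LINQ's Cast():
List<IMonthlyBudget> budgets = concrete.Cast<IMonthlyBudget>().ToList();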
I hope that helps! :-)

Does it make sense to use MetadataType to enforce validations in case of Code First?

I seem to understand the reason behind taking help of MetadataTypeAttribute to Add Validation to the Model in case of Database First as we want to avoid the changes being overwritten when the model is generated from the database next time.
I've noticed a few people defining validation using MetadataType even when they're using the Code First approach and there is no chance of their entity classes being overwritten by some kind of code auto-generation.
Does it make any sense to not apply these DataAnnotations on the actual Entity class directly and instead, separate these into partial class definitions and then link using MetadataType, even when using Code First approach to define Entity Model?
public class MyEntity
{
    [Required]
    public string Name { get; set; }
}
vs
public partial class MyEntity
{
    public string Name { get; set; }
}

[MetadataType(typeof(MyEntityMetadata))]
public partial class MyEntity
{
}

public class MyEntityMetadata
{
    [Required]
    public string Name { get; set; }
}
Does it make any sense to not apply these DataAnnotations on the actual Entity class directly and instead, separate these into partial class definitions and then link using MetadataType, even when using Code First approach to define Entity Model?
In most of the cases it doesn't make sense because it involves unnecessary and redundant code duplication just to associate some attributes with the properties.
It doesn't make sense if the entity class model is created by you with code.
It also doesn't make sense if it's created with some custom code generation you have control over (like T4 template) because you can customize the generation itself.
The only case when it makes sense is when you have no control over the entity class code (for instance, the class coming from 3rd party library). In such case, you can use AssociatedMetadataTypeTypeDescriptionProvider class to associate metadata with the 3rd party class.
For instance, let's say the following class comes from another library with no source code:
public sealed class ExternalEntity
{
    public string Name { get; set; }
}
Then you can define the metadata class:
public class ExternalEntityMetadata
{
    [Required]
    public string Name { get; set; }
}
and associate it with the ExternalEntity using TypeDescriptor.AddProvider method once (during the application startup or something):
TypeDescriptor.AddProvider(
    new AssociatedMetadataTypeTypeDescriptionProvider(
        typeof(ExternalEntity), typeof(ExternalEntityMetadata)),
    typeof(ExternalEntity));
It does make sense to create a metadata class and reuse it many times. In the Code First approach you still need data validation, which you can implement with data annotations, and when you have many properties with the same requirements your life is easier if you share them this way. So it is not only about overwriting; there are other reasons as well. I hope I've understood your question correctly and that this answer is appropriate.
I think the question is where the difference lies between data annotations on the model and data annotations in Code First.
First there is data validation: these are the attributes on your Code First model, and they also set up the configuration of your database columns, i.e. the size and restrictions on your data model. (Once populated, this usually does not change without migrating data.)
Then there is model validation: this applies to the model you bind your form to, and that model would contain more information for your UI.
I don't know why you would apply a Database First technique to a Code First setup, since you can create ViewModels to meet your purpose. Also, not all data annotations are supported by Entity Framework.
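A minimal sketch of that ViewModel idea, reusing the entity name from the question (the view-model name is made up; the attributes come from System.ComponentModel.DataAnnotations):

public class MyEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// UI-facing validation lives on the view model, not on the entity.
public class MyEntityEditViewModel
{
    [Required]
    [StringLength(100)]
    [Display(Name = "Entity name")]
    public string Name { get; set; }
}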
MetadataType limitations
It cannot be applied to a property, and it can only be applied to a single class for each class type.
This attribute cannot be inherited, so you cannot customize it.
On the other hand, this attribute can be applied to a partial class, which is the main purpose of this attribute.
This attribute will be respected by ASP.NET MVC but will not be read by Entity Framework.
Cons of using MetadataType
You have to use ViewBag, ViewData or something else to pass additional information to the view.
Your design is less testable, since it relies on a static object mechanism.
It is also not required, and someone could omit it without breaking anything.
It also means that you are splitting your model class into three files: one generated, one of yours and one with attributes.
If you want to add attributes to existing properties in a (partial) class, this may work or may be ignored by EF; test it:
public partial class YourModelClass
{
    public string YourProperty { get; set; }
}

// Your partial class
[MetadataType(typeof(YourModelClassMetaData))]
public partial class YourModelClass
{
}

// The class that adds your attributes
public class YourModelClassMetaData
{
    [Required]
    public object YourProperty { get; set; }
}

Property templates (multiple properties in multiple classes with same\similar code)?

In a C# project (an MVC project, to be exact) I have some partial classes. Some of those classes have DateTime properties and I need to expand their functionality a bit. What I'm doing is adding a helper property to the class that lets me alter the time separately.
So if I have a SomeDate property in the generated partial class, I would add this to the non-generated part of the class:
DateTime? _someDateTime;

[DataType(DataType.Time)]
public DateTime? SomeDateTime
{
    get
    {
        if (_someDateTime == null)
        {
            if (SomeDate.HasValue)
            {
                _someDateTime = SomeDate;
            }
        }
        return _someDateTime;
    }
    set
    {
        _someDateTime = value;
    }
}
With some extra code elsewhere I can easily enough manipulate the time of SomeDate property in the way that I want.
Now I would need to have similar helper properties related to other properties in this and other partial classes. Is there some alternative to copy-pasting the code above?
P.S. After writing this question, some ideas started forming in my mind. They involve inheriting from DateTime and writing a new attribute that does nothing but provide the name of the related DateTime property.
There are multiple approaches to do that:
As you already stated: create a wrapper class which provides the functionality you need and manipulates the state of the object
Similar to the first one, create an extension method which then manipulates a DateTime instance (see the sketch after this list)
Create a "Helper" class, which is able to change a DateTime instance
It really depends on the context of your code as well on the requirements, constraints, etc.
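A minimal sketch of the extension-method option, assuming you only need to replace the time-of-day component (the method name WithTime is made up):

public static class DateTimeExtensions
{
    // Returns the original date with only the time-of-day replaced;
    // null stays null so it works with nullable DateTime properties.
    public static DateTime? WithTime(this DateTime? source, TimeSpan time)
    {
        if (!source.HasValue)
            return null;

        return source.Value.Date + time;
    }
}

// Usage:
// entity.SomeDate = entity.SomeDate.WithTime(new TimeSpan(14, 30, 0));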

C# DDD Populate Immutable Objects

I have an immutable Customer class in my domain assembly. It contains the following get-only properties: Id, Firstname and Lastname. I have a CustomerRepository class in my persistence assembly. In turn, this CustomerRepository class should populate and return a Customer object using a remote web service.
My Customer class contains no property setters and it has a private constructor. The reason: I don't want the UI developer to get the wrong idea; he should not be able to create or change a Customer object.
My question: How do I get my CustomerRepository to populate my Customer object. Reflection? Or should I sacrifice my design and enable a public constructor for constructing the customer object?
I sympathize with the desire to reduce the surface of the API and not mislead callers, but I still recommend adding a public constructor. The fact that there are no setters and no public SaveCustomer method should make it clear enough that the customer is immutable.
If you really don't want a public constructor, consider whether you really need separate domain and persistence assemblies: there are good reasons to split related code into two assemblies, but it shouldn't be the default position and shouldn't replace namespaces as the primary way of organizing code (Patrick Smacchia has written a few great articles explaining why).
If you combine them into a single assembly, you can just make the constructor internal and be done with it. (As another respondent mentioned, InternalsVisibleTo is a viable alternative - but it's really just a hack: your classes and design goals are telling you these should be in a single assembly.)
You might want to declare an internal constructor, with your three properties as parameters. If your CustomerRepository does not live in the same assembly as your Customer class, then you can make your internals visible by using the following attribute:
[assembly: InternalsVisibleTo ("CustomerAssembly, PublicKey=...")]
in the Customer assembly.
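A minimal sketch of that internal-constructor approach, using the property names from the question (the repository body is illustrative):

public class Customer
{
    public int Id { get; private set; }
    public string Firstname { get; private set; }
    public string Lastname { get; private set; }

    // Reachable from the persistence assembly via InternalsVisibleTo,
    // but not by ordinary consumers of the domain assembly.
    internal Customer(int id, string firstname, string lastname)
    {
        Id = id;
        Firstname = firstname;
        Lastname = lastname;
    }
}

// In the persistence assembly:
public class CustomerRepository
{
    public Customer GetById(int id)
    {
        // Call the remote web service here, then materialize the immutable object.
        return new Customer(id, "firstname-from-service", "lastname-from-service");
    }
}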
Edit: By the way, I would not recommend using reflection if you need to create lots of objects, because doing so will be orders of magnitude slower than direct calls to constructors. If you really have to go that route, I'd recommend adding a static factory method which you can call through reflection in order to get an efficient allocator.
For instance:
public class Customer
{
    private Customer(...) { ... }

    private static ICustomerFactory GetCustomerFactory()
    {
        return new CustomerFactory();
    }

    private class CustomerFactory : ICustomerFactory
    {
        // Must be public (or an explicit interface implementation) to satisfy ICustomerFactory.
        public Customer CreateCustomer(...) { return new Customer(...); }
    }
}

public interface ICustomerFactory
{
    Customer CreateCustomer(...);
}
Use reflection to call Customer.GetCustomerFactory and from then on, you'll have a fast and efficient way of creating your Customers.
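A sketch of that reflection call, assuming the factory shape from the example above (requires System.Reflection):

var factoryMethod = typeof(Customer).GetMethod(
    "GetCustomerFactory",
    BindingFlags.NonPublic | BindingFlags.Static);

var factory = (ICustomerFactory)factoryMethod.Invoke(null, null);

// From here on, creating Customers goes through the factory and avoids
// per-object reflection cost:
// Customer customer = factory.CreateCustomer(...);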

How should I design my object model so that my DAL can populate read-only fields?

In order to separate concerns on my current project, I've decided to completely separate my DAL and BLL/business objects into separate assemblies. I would like to keep my business objects as simple structures without any logic, to keep things extremely simple. I would also like to keep my business logic separate from my DAL. So my application will tell my DAL to load my objects, my DAL will run off to the database and get the data, populate the object with the data and then pass it back to my BLL.
Question - how can I have my DAL in a separate assembly and push data into the read only fields?
If I set the getter as protected then inherited objects can access it which isn't really what I want as I'd be returning the inherited object types, not the original object types.
If I set the getter as internal, then my DAL must reside in the same assembly as my BLL which I don't want.
If I set the getter as public, then anyone can read/write to it when it should be read only.
Edit: I note that I can have a return type of ObjectBase but actually return an object or collection of objects derived from ObjectBase, so to the outside world (outside my DAL) the properties would be read-only, but to my derived types (only accessible inside my DAL) the properties are actually read/write.
You can set the read-only property via a constructor.
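For example, a minimal sketch (class and member names are illustrative): the backing field is readonly, the DAL in another assembly supplies the value when it materializes the object, and no caller can change it afterwards.

public class OrderDto
{
    private readonly int _id;

    public int Id { get { return _id; } }   // read-only to every caller
    public string Name { get; set; }

    public OrderDto(int id)
    {
        _id = id;
    }
}

// In the DAL assembly:
// var order = new OrderDto(idFromDatabase) { Name = nameFromDatabase };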
This is a situation without a silver-bullet; the simplest options are limited or don't meet your requirements and the thorough solutions either begin to have smells or begin to veer away from simplicity.
Perhaps the simplest option is one that I haven't seen mentioned here: keeping the fields / properties private and passing them as out / ByRef parameters to the DAL. While it wouldn't work for large numbers of fields it would be simple for a small number.
(I haven't tested it, but I think it's worth exploring).
public class MyObject
{
    private int _Id;
    public int Id { get { return _Id; } } // Read-only
    public string Name { get; set; }

    // This method is essentially a more descriptive constructor, using the
    // repository pattern for separation of Domain and Persistence
    public static MyObject GetObjectFromRepo(IRepository repo)
    {
        MyObject result = new MyObject();
        return repo.BuildObject(result, out result._Id);
    }
}

public class MyRepo : IRepository
{
    public MyObject BuildObject(MyObject objectShell, out int id)
    {
        // Retrieve the Name and Id values from the data store
        string objectName = "Name from Database";
        int objectId = 42;

        objectShell.Name = objectName;
        Console.WriteLine(objectShell.Id); // <-- 0, as it hasn't been set yet

        id = objectId; // Setting this out parameter indirectly updates the value in the resulting object
        Console.WriteLine(objectShell.Id); // <-- Should now be 42

        return objectShell;
    }
}
It's also worth noting that keeping your domain / business objects to the bare minimum can involve more than you think. If you intend to databind to them, then you'll need to implement INotifyPropertyChanged, which prevents you from using automatically-implemented properties. You should be able to keep it fairly clean, but you will have to make some sacrifices for basic functionality.
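A minimal sketch of what that databinding requirement adds to an otherwise plain object (requires System.ComponentModel; the class name is illustrative):

public class ObservableModel : INotifyPropertyChanged
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name != value)
            {
                _name = value;
                OnPropertyChanged("Name");
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}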
This keeps your SoC model nicely, it doesn't add in too much complexity, it prevents writing to read-only fields and you could use a very similar model for serialization concerns. Your read-only fields can still be written to by your DAL, as could your serializer if used in a similar fashion - it means that conscious effort must be taken by a developer to write to a read-only field which prevents unintentional misuse.
Model Project
namespace Model
{
    public class DataObject
    {
        public int id { get; protected set; }
        public string name { get; set; }
    }
}
Data Project
using System.Collections.Generic;
using Model;

namespace Data
{
    class DALDataObject : DataObject
    {
        public DALDataObject(int id, string name)
        {
            this.id = id;
            this.name = name;
        }
    }

    public class Connector
    {
        public static DataObject LoadDataObject(int objectId)
        {
            return new DALDataObject(objectId, string.Format("Dummy object {0}", objectId));
        }

        public static IEnumerable<DataObject> LoadDataObjects(int startRange, int endRange)
        {
            var list = new List<DataObject>();
            for (var i = startRange; i < endRange; i++)
                list.Add(new DALDataObject(i, string.Format("Dummy object {0}", i)));

            return list;
        }
    }
}
How about just living with it?
Implement with those guidelines, but don't add such a hard constraint in your model. Let's say you do: then another requirement comes along where you need to serialize the object or do something else, and now you are tied to the constraint.
As you said in another comment, you want pieces that are interchangeable ... so, basically, you don't want something that's tied into specific relations.
Update 1: Perhaps "just live with it" was too simplistic, but I still have to stress that you shouldn't go too deep into these things. Using simple guidelines and keeping your code clean and SOLID is the best you can do at the beginning. It won't get in the way of progress, and refactoring once everything is more settled isn't hard.
Make no mistake, I am not at all a person who writes code without thinking about it. But I have gone with such approaches, and only in a handful of cases did they pay off, without any indication that you wouldn't have had a similar result by going simple and evolving it.
IMHO this one does not fit into the important architectural concerns that need to be addressed at the very beginning.
Pre-emptive follow-up: beware if you can't trust your team to follow simple guidelines. Also make sure to begin with some structure: pick a couple of scenarios that put that structure in place with real code; the team will find their way much better when there is something simple there.
In my opinion, the best way to handle this is to have the business objects and the DAL in the same assembly separated by namespace. This separates the concerns logically and allows you to use internal setters. I can't think of any benefit to separating them into their own assemblies because one is useless without the other.
