I've developed a winforms app for data entry and I want to know if there is a better way to handle the dirty/new tracking and committing changes to the database (SQL Server).
My overall strategy is this:
I have several business objects and most of them have a corresponding data entry form.
I have implemented INotifyPropertyChanged on these business objects.
I also created an interface to handle Dirty tracking. It looks like this:
public interface IRecord
{
bool NewRecord {get;set;}
bool Dirty {get;set;}
int SaveRecord();
}
Each business object (e.g. Region) that has a corresponding data entry form gets a derived type:
public class Region
{
public string RegionName {get;set;}
public List<Study> Studies {get;set;}
}
public class RegionRecord: Region, IRecord
{
public bool Dirty {get;set;}
public bool NewRecord {get;set;}
int ID { get;set; } // ID is here so it is not in the base class
public new List<StudyRecord> Studies {get;set;}
public int SaveRecord()
{
    if (NewRecord)
    {
        // stored procedure insert, return 1 on error
    }
    else if (Dirty)
    {
        // stored procedure update, return 1 on error
    }
    return 0;
}
}
I did this to remove the "database" fields like ID from the basic business objects. I was trying to make a layer between the business object and an object that would interact with the database, and I thought the ID wasn't necessary for the business class to have.
As defined above, some business objects have a property that is another List. In this case, a Region can have many Study objects. On the data entry form, I also want to track changes to each of these Study objects (usually via a DataGridView). You can see that I've hidden the base Studies property and replaced it with a List<StudyRecord> version.
Each data entry form uses a Binding Source with a BindingList (e.g. BindingList<RegionRecord>) as its data source. This way I can detect changes to the properties of these objects and then set the Dirty property of each object.
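A minimal sketch of that wiring (the binding-source name and LoadRegions() are placeholders, not my real code):
// Requires using System.ComponentModel;
var regions = new BindingList<RegionRecord>(LoadRegions());
regionBindingSource.DataSource = regions;
// BindingList<T> raises ListChanged with ItemChanged whenever a bound
// item that implements INotifyPropertyChanged changes a property.
regions.ListChanged += (s, e) =>
{
    if (e.ListChangedType == ListChangedType.ItemChanged)
        regions[e.NewIndex].Dirty = true;
    else if (e.ListChangedType == ListChangedType.ItemAdded)
        regions[e.NewIndex].NewRecord = true;
};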
I also build reports with OpenXML wordprocessing and use the base objects to populate these reports. My issue is that the properties that are lists are empty because they are hidden. It feels wrong to keep both the base and derived version of the properties in sync.
I feel like I've painted myself into a corner with this implementation and there must be a better way.
Am I going about this all wrong?
Related
I have an idea for a web app where I will want the user to create their own database through a web application, with their own table names and field types.
I thought about creating a database structure using Object Oriented Programming so that a pre-made database will support all kinds of Entities with custom properties. Something like this:
public class CustomType
{
public long TypeId {get;set;}
public string ActiveType {get;set;}
}
public class CustomProperty
{
public int wholeNumber {get;set;}
public string text {get;set;}
public bool boolean {get;set;}
public decimal dec {get;set;}
// Chosen Id of the type to work with
public long TypeId {get;set;}
public bool wholeNumber_ACTIVE {get;set;}
public bool text_ACTIVE {get;set;}
public bool boolean_ACTIVE {get;set;}
public bool dec_ACTIVE {get;set;}
}
public class CustomEntity
{
public string TableName {get;set;}
public CustomProperty Prop01 {get;set;}
public CustomProperty Prop02 {get;set;}
public CustomProperty Prop03 {get;set;}
public CustomProperty Prop04 {get;set;}
public CustomProperty Prop05 {get;set;}
}
The idea behind this is to let the user decide what they want their database to store, on a pre-made database for them to work with, without having to create it during runtime since this is a web app.
I believe I can manage it like this for them to store whatever they need, but I'm also thinking about the following issues:
How will I manage relationships when the user needs to link tables with Ids and foreign keys?
(I thought about managing a public long ForeignId {get;set;} and just storing the Id they need to associate.)
How will I manage queries, since tables will have CodeNames and each will mean something different to each person who sets it up?
(I thought about renaming the table at runtime, but I'm afraid of errors and DB corruption.)
I also thought about sending direct queries to create the database according to the user's needs, but then again inexperienced users can really mess things up here or find it hard to manage.
How can I manage migrations or DB changes with code instead of using the PowerShell console?
If we have multiple users, each with a unique database but the same web app, how can we manage web.config files to work with this idea?
I know there are a lot of questions here. I'm looking for the best way to achieve this: having multiple users each own their own small web app over the internet, using the MVC pattern and lots of options through a browser.
I would recommend an Entity Attribute Value (EAV) pattern as a solution. With the EAV pattern, rather than creating new tables with new columns for every custom property you wish to store, you store those properties in rows. For example, instead of every custom table being defined like this:
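(An illustrative sketch only; the class and column names here are made up.)
// Rigid design: one dedicated table per user-defined type, one column per custom property.
public class CustomerTable
{
    public long Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
    // Adding a new property means adding a column;
    // adding a new entity type means creating a whole new table.
}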
You define them like this instead:
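(Again just a sketch with assumed names; the EntityFields table mentioned below is the middle one.)
// One row per entity type the user defines (e.g. "Customer", "Invoice")
public class Entity
{
    public long EntityId { get; set; }
    public string Name { get; set; }
}
// One row per custom property defined for an entity type
public class EntityField
{
    public long FieldId { get; set; }
    public long EntityId { get; set; }
    public string FieldName { get; set; }
}
// One row per stored value: which record, which field, and the value itself
public class EntityFieldValue
{
    public long RecordId { get; set; }
    public long FieldId { get; set; }
    public string Value { get; set; }   // everything stored as one type (string)
}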
This allows for flexible creation of multiple entities with multiple properties. The classes in your business logic will then be Entity classes with a collection of Property objects.
In case you haven’t spotted the trade-offs already, the limitation of using the EAV model is the inability to specify field types (int, varchar, decimal, etc.); in fact, all your property values will be stored as a single type (usually strings).
There are a number of ways to address this. Some handle all the validation etc. in the business logic, others create Field tables per type, so based on my example, rather than having just one EntityFields table, you’ll have multiple, separated by type.
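For example (again, assumed names), the per-type variant splits the value storage so each table keeps its natural SQL type:
// One value table per storage type instead of a single string-valued table
public class EntityStringFieldValue
{
    public long RecordId { get; set; }
    public long FieldId { get; set; }
    public string Value { get; set; }
}
public class EntityIntFieldValue
{
    public long RecordId { get; set; }
    public long FieldId { get; set; }
    public int Value { get; set; }
}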
I'm using code-first with entity framework for modeling and creating my db.
I'm just wondering - when I return objects to the user - I don't want to return sensitive data fields like the Id.
Should I add an attribute like [DoNotReturn] combined with a filter to remove these fields when returning to the user, or should I just make a whole new class which doesn't contain these fields?
Example:
public class UserAccount
{
//Option 1 - Hide fields
[DoNotReturn]
public int Id { get; set; }
[Index(IsUnique = true)]
[MaxLength(50)]
public string Username { get; set; }
public decimal Cash { get; set; }
[DoNotReturn]
public Account Account { get; set; } //Contains Email / Password
//Option 2 - return a `safe` version
public FriendlyUser UserAccountToFriendly() {
return new FriendlyUser(this.Username, this.Cash);
}
}
Keep your database model separate from your view model; that's the approach I have taken for a long time, and it gives you a good separation. Once you start dealing with ViewModels, you can use a library like AutoMapper or custom mapping classes to convert a ViewModel to a database model and vice versa. I hope this helps.
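A hand-rolled mapping can be as small as this (FriendlyUser is just the "safe" view model from the question; the extension-method name is made up):
public class FriendlyUser
{
    public string Username { get; set; }
    public decimal Cash { get; set; }
}
public static class UserAccountMappings
{
    // Only copies fields that are safe to expose; Id and Account never leave here.
    public static FriendlyUser ToFriendlyUser(this UserAccount account)
    {
        return new FriendlyUser
        {
            Username = account.Username,
            Cash = account.Cash
        };
    }
}
With AutoMapper you would instead declare the same mapping once in a MapperConfiguration and call mapper.Map<FriendlyUser>(account).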
Never use your database models as results for end users; keep them separate from the Presentation/Application layer.
There are so many problems that you will encounter:
disclosure of sensitive data (which you've mentioned);
performance issues and wasted RAM and CPU (for instance, if you have an Order entity with dozens of properties, it is better to load only the properties that are required rather than all of them);
problems with serialization (with lazy loading enabled, for instance, MVC could try to serialize the whole object graph with its navigation properties);
etc...
I'd like to recommend the following:
return the original database entity from the Repository layer if necessary, but don't forget to map it in the Presentation layer to a completely new xxxModel, xxxViewModel, xxxResponse, etc.;
return an xxxView from the Repository layer if you want the best optimizations, but again map it in the Presentation layer to a brand new object (any changes in one layer shouldn't affect the others, especially end users).
Scenario: I am writing a program that handles report generation.
I have the report stored in a database, mapped to an EF model. There are some non-database fields (i.e. some fields are auto-calculated based on other fields that ARE in the db). Would it make sense to have one class that solely maps to the DB, and another class that takes that information and additionally has the other calculating fields?
i.e. a sample class to interact with the codefirst database would be
public class Report{
public int CategoryOneSeverity {get; set;}
public int CategoryTwoSeverity {get;set;}
public string Title {get;set;}
}
Would it make sense to make another class, like:
public class ReportModel{
public int CategoryOneSeverity {get;set;}
public int CategoryTwoSeverity {get;set;}
public string Title {get;set;}
public int RiskRating{
get{ return CategoryOneSeverity + CategoryTwoSeverity; }
}
}
Or should the RiskRating property be in the EF model?
Yes, I absolutely believe you should have different classes to model your domain than your DB. Unless your application is extremely trivial, if you try to map your domain objects directly, you invariably have to change them to match what you need your data structure to be, and possibly expose things you don't want to expose. Think of it as a violation of the Single Responsibility principle; your class has two reasons to change if you make it your domain object and map it directly. One is in response to changing business requirements, the other is in response to changing data storage schema.
"Would it make sense to have one class that solely maps to the DB, and
another class that takes that information and additionally has the
other calculating fields?"
Most likely yes. Usually I would create a new class suffixed with "ViewModel" such as HumanResourcesReportViewModel if my entity class was HumanResourcesReport.
There's lots of variations on how to use ViewModels, and we could get into a pedantic debate about terminology, but conceptually, take your entity and create a new class with that data plus whatever additional information you need to process the report. In this case the report generation is in a way the View of the MVC model, so I don't think it's offensive to call the class holding the data a ViewModel.
Are you using Code First or DB First?
You can have auto calculated fields in your model, which are not mapped to fields in the database.
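With code first, for instance, the calculated property can sit on the entity itself and be excluded from the table with [NotMapped] (a sketch reusing the question's RiskRating example):
using System.ComponentModel.DataAnnotations.Schema;
public class Report
{
    public int CategoryOneSeverity { get; set; }
    public int CategoryTwoSeverity { get; set; }
    public string Title { get; set; }
    // Computed on the fly; [NotMapped] tells EF not to create a column for it.
    [NotMapped]
    public int RiskRating
    {
        get { return CategoryOneSeverity + CategoryTwoSeverity; }
    }
}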
It also depends on your architecture. If you're using DB first, refreshing your EF model would regenerate your EF classes, losing any calculated fields you had added to them. In the DB-first scenario, an alternative would be to use the EF model class as your base class and inherit from it for your report class.
public class ReportModel
{
public int CategoryOneSeverity {get;set;}
public int CategoryTwoSeverity {get;set;}
public string Title {get;set;}
}
public class ReportClass : ReportModel
{
public int RiskRating
{
get { return CategoryOneSeverity + CategoryTwoSeverity; }
}
}
We’re developing an N-tier architecture application using WCF between the client (presentation) and server (data/business layer). To be honest, I can’t find any real examples/info on how to efficiently expose calculated data through WCF.
For describing my problem say we have ATM machines which have lots of transactions. So we have an 1-N relation between ATM Class and Transaction class. ATM Class has properties like Location, ModelNo, Description, InstallDate and the Transaction records have info like Amount, DateTime, CustomerInfo, TicketPaperLength, ElectricityUsed
Exposing these classes through WCF is not the issue. Problem is that we have lots of calculated fields for ATM that are based on the underlying Transaction table. For example, the client application uses reports based on the calculated data of ATM. Examples of calculated data of ATM could be: AverageTicketPaperLength, AverageAmount, DeviationAmount, AverageElectricity, etc, etc. There are lots and lots of these calculated data. The calculations should take place on the server and not on the client-side. If these report definitions were all fixed it wouldn’t be that big a problem: we could create separate services/Poco’s, for the reports. Put the calculations in a business layer and fill the Poco as needed. But the client application must have the ability to make reports filtered on whatever set of calculated properties of ATM and return as data another set of (calculated) properties.
I could create a Poco with about 500 calculated properties where, for each single report, maybe only 10 properties would be used. But of course we don’t want all 500 calculations executed every time for each and every entity.
So in general I’m wondering how one would expose calculated data of an entity through e.g. WCF. Almost all examples I see explaining Entity Framework, Poco and WCF only deal with the persistent fields of the entity and that is pretty straight-forward.
Do not expose entities through WCF, create some DTOs.
For example:
In wcf layer -
DtoInfoForReport1 GetInfoForReport1(long atmId) { ... call BL here... }
DtoInfoForReport2 GetInfoForReport2(long atmId) { ... call BL here... }
In data layer -
public class AtmEntity
{
long Id {get;set;}
... some properties ...
HashSet<Transaction> AtmTransactions {get;set;}
}
Transfer objects -
public class DtoInfoForReport1
{
long AtmId {get;set;}
XXX SomeCalculatedValue {get;set;}
}
In BL -
DtoInfoForReport1 CreateInfoForReport1(long atmId)
{
var atm = YYY.GetEntity<AtmEntity>(atmId);
return new DtoInfoForReport1
{
AtmId = atmId,
SomeCalculatedValue = DoSomeCalculationOverMyAtmWithItsTransactions(atm),
};
}
Hope I got your question right. Otherwise comment.
Edit based on comment:
Then I would suggest DTOs like this:
[DataContract]
public class DtoRequestedCalculations
{
[DataMember]
public long AtmId {get;set;}
[DataMember]
public List<DtoRequestedCalculationEntry> Calculations {get;set;}
}
[DataContract]
public class DtoRequestedCalculationEntry
{
[DataMember]
public string / long / Guid / XXX ParameterIdentifier {get;set;}
[DataMember]
public double / DtoParameterCalculatedValueBase Value {get;set;}
}
Now if your calculated value is always a double, it's basically done. If your values may be of different types, you will need some base class - DtoParameterCalculatedValueBase, which is something like this:
[DataContract]
[KnownType(typeof(DtoParameterDoubleCalculatedValue))]
[KnownType(typeof(DtoParameterXXXCalculatedValue))]
public class DtoParameterCalculatedValueBase
{
// ...whatever common part there may be, or nothing...
}
[DataContract]
public class DtoParameterDoubleCalculatedValue : DtoParameterCalculatedValueBase
{
[DataMember]
public double Value {get;set;}
}
[DataContract]
public class DtoParameterXXXCalculatedValue : DtoParameterCalculatedValueBase
{
[DataMember]
public XXX Value {get;set;}
}
Note the KnownType attribute - it tells WCF what types may come in place of base class. You will have to provide this attribute for each inherited type (or use DataContractResolver, which is already another story).
Then in WCF:
DtoRequestedCalculations GetCalculatedValuesForAtm(long atmId, List<long / string/ Guid / XXX> valueIdentifiers);
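One way to avoid running all 500 calculations per call (just a sketch; the dictionary of delegates, the YYY repository call, and the numeric property types are assumptions, not part of the original code) is to register each calculation against its identifier and execute only the requested ones:
// Requires using System.Collections.Generic; and using System.Linq;
static readonly Dictionary<string, Func<AtmEntity, double>> Calculations =
    new Dictionary<string, Func<AtmEntity, double>>
    {
        { "AverageAmount", atm => (double)atm.AtmTransactions.Average(t => t.Amount) },
        { "AverageTicketPaperLength", atm => (double)atm.AtmTransactions.Average(t => t.TicketPaperLength) }
        // ... one entry per calculated property ...
    };

public DtoRequestedCalculations GetCalculatedValuesForAtm(long atmId, List<string> valueIdentifiers)
{
    var atm = YYY.GetEntity<AtmEntity>(atmId);
    return new DtoRequestedCalculations
    {
        AtmId = atmId,
        Calculations = valueIdentifiers
            .Select(id => new DtoRequestedCalculationEntry
            {
                ParameterIdentifier = id,
                Value = Calculations[id](atm)   // only the requested calculations run
            })
            .ToList()
    };
}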
I posted this:
Object depending on another object - when to load its properties
After reading this and researching some more, I've come to realize that lazy loading would be ideal for my situation. However, some background info: the class I posted, Department, gets its property data from a database (SQL Server 2005). It's your typical setup:
Front End->BOL->DAL
So I wanted my Department class to basically just hold information pertinent to it. I did not / do not want to expose my business object class to this class. So how would I fill my Department object without having to make a call to my business object layer?
I think code would help:
public class Employee
{
public int ID { get; set; }
public string FirstName { get; set; }
public Department d { get; set; } //todo: implement lazy load in get.
public Employee(int ID, string FirstName)
{
this.ID = ID;
this.FirstName = FirstName;
}
}
class Department
{
public string DepartmentID { get; set;}
public string CostCenter { get; set; }
public bool hasManager { get; set; }
//more code
//constructor for department
}
And then say my BOL is used to call my DAL to do something like:
//some bol class that simply calls the data access layer class
DAL.GetDepartments("SomeDepartment");
And the DAL does some code to return some data to fill a department object...
However, if I lazy load the get property for the department, then it would need to know about my BOL, so I'd have to add a using statement to include that class as well. That can't be the right way to do this... Any tips or pointers?
If you really want to do lazy loading of components, take a look at an ORM like NHibernate, EF4, etc.
Creating your own lazy load solution is not a trivial task, and it involves some complex concepts like dynamic proxying, maybe IL generation, and other advanced topics.
Anyway, regarding your question on dependencies and whether or not to use your BOL from your DAL, check if you can apply dependency inversion.
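A rough sketch of the dependency inversion idea (interface and parameter names are mine, not from the question): the BOL declares the abstraction, the DAL implements it, and Employee only ever sees the interface.
// Declared in the business layer; the DAL provides the implementing class,
// so the BOL never has to reference the DAL.
public interface IDepartmentRepository
{
    Department GetDepartment(string departmentId);
}

public class Employee
{
    private readonly IDepartmentRepository _departments;
    private readonly string _departmentId;
    private Department _department;

    public Employee(int id, string firstName, string departmentId, IDepartmentRepository departments)
    {
        ID = id;
        FirstName = firstName;
        _departmentId = departmentId;
        _departments = departments;
    }

    public int ID { get; set; }
    public string FirstName { get; set; }

    // Lazy load through the abstraction; the concrete DAL implementation
    // is supplied from the composition root (or a DI container).
    public Department Department
    {
        get { return _department ?? (_department = _departments.GetDepartment(_departmentId)); }
    }
}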
In my current project, almost all my domain business objects have at least two constructors. (1) takes an id for that object, so that it can self-construct, pulling data from the db itself. The other constructor (2) takes the data entity required for that domain object to function. In this way, I can eager load or lazy load, whichever is best at the time.
So, if someone calls your Employee business object's Department property, the property could check whether it already has it. If not, you could get the DepartmentID from the employee data, instantiate a Department, store it, and return it. In this way, it's easy to get whatever domain objects you need.
But you still want the eager-loading option there. Let's say you have a Department business object, with an Employees property to return a List(Of Employee). The Employees property would directly get the data from the database, and instantiate each Employee using the data constructor. So you'd have your cool Employee business objects, 80 of them, but with just one data query.
And then to take it up a notch, your objects could accept multi-layer data. For example, you can construct an Employee with an Employee entity from the DAL, that also includes the data for the Department, Supervisor, etc. The "ID" constructor could also get this data from the get-go.
Eventually you'll have some decisions to make about what to pre-load and what to lazy-load. For example, maybe 90% of the time when you construct an Employee you also need the Department data; then you might decide to just get the Department data whenever you construct an Employee using the EmployeeID constructor. But the supervisor data, say, is only used 8% of the time, so you might decide to always lazy load that.
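A condensed sketch of that approach (DAL.GetDepartment, DepartmentData and the Employee(row) constructor are stand-ins for whatever your DAL actually returns):
// Requires using System.Collections.Generic; and using System.Linq;
public class Department
{
    private List<Employee> _employees;

    // (1) ID constructor: the object pulls its own data.
    public Department(string departmentId)
        : this(DAL.GetDepartment(departmentId))
    {
    }

    // (2) Data constructor: built from an entity already in hand, no extra query.
    public Department(DepartmentData data)
    {
        DepartmentID = data.DepartmentID;
        CostCenter = data.CostCenter;
    }

    public string DepartmentID { get; set; }
    public string CostCenter { get; set; }

    // Lazy, but loads all employees with a single query and builds each
    // Employee through its data constructor.
    public List<Employee> Employees
    {
        get
        {
            if (_employees == null)
                _employees = DAL.GetEmployeesForDepartment(DepartmentID)
                                .Select(row => new Employee(row))
                                .ToList();
            return _employees;
        }
    }
}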
The simple answer is, obviously, it can't be done. Either you're going to have to let your business objects know about your data layer (make sure to use DI if you go this route) or you can't lazy load from within the business class. A happy medium might be to create a service class that knows about both DAL workings and your business layer and knows how to load and create business objects. Of course, as Juanma stated, ORMs were built to do this kind of thing.