Auditing rows added to Azure table storage - C#

I have created the following class, which I believe gives me some useful auditing capabilities for data rows in tables that require them:
public class AuditableTableServiceEntity : TableServiceEntity
{
    protected AuditableTableServiceEntity()
        : base()
    {
    }

    protected AuditableTableServiceEntity(string pk, string rk)
        : base(pk, rk)
    {
    }

    #region CreatedBy and ModifiedBy

    private string _CreatedBy;

    [DisplayName("Created By")]
    public string CreatedBy
    {
        get { return _CreatedBy; }
        set { _CreatedBy = value; Created = DateTime.Now; }
    }

    [DisplayName("Created")]
    public DateTime? Created { get; set; }

    private string _ModifiedBy;

    [DisplayName("Modified By")]
    public string ModifiedBy
    {
        get { return _ModifiedBy; }
        set { _ModifiedBy = value; Modified = DateTime.Now; }
    }

    [DisplayName("Modified")]
    public DateTime? Modified { get; set; }

    #endregion
}
Can anyone suggest additional changes I should consider for this class? I believe it is okay, but since I need to implement this for many classes, I would like to hear about any changes or additions you would make.

private string _ModifiedBy;

[DisplayName("Modified By")]
public string ModifiedBy
{
    get { return _ModifiedBy; }
    set { _ModifiedBy = value; Modified = DateTime.Now; }
}
is a risky pattern. As written it works, because the setter writes to the backing field; but if a setter ever assigns to the property itself rather than the backing field, it will cause a stack overflow: setting the value of a property in its own setter calls the setter, which sets the value of the property, which calls the setter, and so on.
You could set the properties in a constructor, but then things break if an instance is serialized and deserialized: when you deserialize it, the public parameterless constructor is called, and the setter is called... which sets the property to the date and time that the object was deserialized, not the stored value.
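To make the recursion pitfall concrete, this is the self-referencing variant (an illustration only, not the questioner's code as posted):

public string ModifiedBy
{
    get { return _ModifiedBy; }
    set
    {
        ModifiedBy = value;      // calls this same setter again -> StackOverflowException
        Modified = DateTime.Now; // never reached
    }
}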
A better pattern might be to create another table for auditable events. This might look something like this:
public class Audit
{
    public string ModifiedBy { get; set; }
    public DateTime DateModified { get; set; }
    public Type ObjectType { get; set; }
    public string Field { get; set; }
    public object OldValue { get; set; }
    public object NewValue { get; set; }

    public static void Record(string user, Type objectType, string field, object oldValue, object newValue)
    {
        Audit newEvent = new Audit
        {
            ModifiedBy = user,
            DateModified = DateTime.UtcNow, // UtcNow avoids timezone issues
            ObjectType = objectType,
            Field = field,
            OldValue = oldValue,
            NewValue = newValue
        };
        Save(newEvent); // implement according to your particular storage classes
    }
}
Then, whenever you make changes to an object you want to audit, call Audit.Record() like so:
public class SomeKindOfAuditableEntity
{
    private string _importantFieldToTrack;

    public string ImportantFieldToTrack
    {
        get { return _importantFieldToTrack; }
        set
        {
            // GetCurrentUser() is whatever resolves the acting user in your app
            Audit.Record(GetCurrentUser(), this.GetType(), "ImportantFieldToTrack",
                         _importantFieldToTrack, value);
            _importantFieldToTrack = value;
        }
    }
}
This way you store a log of all changes that happen to all "interesting" properties of your tables. This has a few other advantages:
you see the old and new values of each change
the audit log is stored in a different place from the data itself, separating concerns
you don't need to have a base class for your data classes
the audit for old changes is kept around so you can go back through the entire log of the object's changes
The principal disadvantage is that you need to add code to each setter for each property you're interested in. There are ways to mitigate this with attributes, reflection, and aspect-oriented programming -- see for instance Spring.NET's AOP implementation here: http://www.springframework.net/doc-latest/reference/html/aop.html -- in essence, you'd create an attribute for the properties you'd like to track.
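For example, a marker attribute plus a reflection-based snapshot diff could record changes without touching every setter. This is a minimal sketch: the [Audited] attribute and AuditScanner are hypothetical, and Audit.Record is the version with the field parameter from above.

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class AuditedAttribute : Attribute { }

public static class AuditScanner
{
    // Compares two snapshots of the same entity and records every changed
    // [Audited] property through Audit.Record().
    public static void RecordChanges(object before, object after, string user)
    {
        Type type = after.GetType();
        foreach (PropertyInfo prop in type.GetProperties())
        {
            if (!Attribute.IsDefined(prop, typeof(AuditedAttribute)))
                continue;

            object oldValue = prop.GetValue(before, null);
            object newValue = prop.GetValue(after, null);
            if (!Equals(oldValue, newValue))
                Audit.Record(user, type, prop.Name, oldValue, newValue);
        }
    }
}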
The other disadvantage is that you'll consume lots of storage for the audit log - but you can have a background process that trims down the old entries periodically as you see fit.
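A trimming pass could be as simple as this sketch, where LoadAuditEntriesBefore and Delete are hypothetical placeholders for whatever storage layer backs Save():

// Deletes audit entries older than the retention window; run it from a
// scheduled worker or timer.
public static void TrimAuditLog(TimeSpan retention)
{
    DateTime cutoff = DateTime.UtcNow - retention;
    foreach (Audit oldEntry in LoadAuditEntriesBefore(cutoff)) // hypothetical query
        Delete(oldEntry);                                      // hypothetical delete
}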

Related

Ignoring empty values on AutoMapper with .NET Core 7?

I am using .NET Core 7.0 and AutoMapper.Dependency 12.
I am sending a JSON object like the one below to the Company table via Postman.
When some values are left empty, the corresponding database columns are automatically overwritten with null.
I have the structure below, and I want null values to be ignored.
The user may leave some columns of the companyUpdateDTO object blank, so how can I ignore the blank values that arrive with the DTO?
I want to do this globally via AutoMapper, but I have not been able to get it to ignore the empty values.
The JSON object I sent: I am not submitting the "description" field, so it gets set to null in the database.
{
    "id": 1002,
    "name": "xcv"
}
Company entity:
public class Company
{
    public int Id { get; set; }
    public string? Name { get; set; }
    public string? Description { get; set; }
    public DateTime? CreatedDate { get; set; }
    public DateTime? UpdatedDate { get; set; }
}
CompanyUpdateDTO class:
public class CompanyUpdateDto
{
    public int Id { get; set; }
    public string? Name { get; set; }
    public string? Description { get; set; }
    public DateTime? UpdatedDate { get; set; }
}
Program.cs:
builder.Services.AddAutoMapper(typeof(AutoMapperProfile));
AutoMapper profile:
public class AutoMapperProfile : Profile
{
    public AutoMapperProfile()
    {
        AllowNullCollections = true;

        #region Company DTO
        CreateMap<Company, CompanyDto>().ReverseMap();
        CreateMap<Company, CompanyCreateDto>().ReverseMap();
        //CreateMap<Company, CompanyUpdateDto>().ReverseMap();
        CreateMap<Company, CompanyUpdateDto>().ReverseMap()
            .ForAllMembers(opts => opts.Condition((src, dest, srcMember) => srcMember != null));
        #endregion
    }
}
The problem is two-fold. First you receive JSON and deserialize it into a DTO (by the way, which serializer do you use?). After that, you can no longer distinguish whether the value of a property was explicitly set or whether it still has its default value.
In a second step you convert this DTO into another object using AutoMapper and send it to your database. In the case of an update, how do you read the existing entity from your database, and how do you make the update call?
Let's start by trying to distinguish between explicitly set values and omitted values. To solve this problem, you have to define magic values for each type that can be omitted. Think hard about these values and try to find ones that will really never be used, because you cannot distinguish between an omitted value and an explicitly set value that happens to be exactly the magic value! For example:
public static class Omitted
{
    public static readonly int Integer = int.MinValue;
    public static readonly string String = "{omitted}";
    public static readonly DateTime DateTime = DateTime.MaxValue;
}
With these defaults in place, you have to slightly adjust your DTO classes and apply the omitted values as property defaults:
public class CompanyUpdateDto
{
    public int Id { get; set; } = Omitted.Integer;
    public string? Name { get; set; } = Omitted.String;
    public string? Description { get; set; } = Omitted.String;
    public DateTime? UpdatedDate { get; set; } = Omitted.DateTime;
}
If you have prepared your DTO accordingly and you deserialize your JSON into a new instance, you can distinguish by comparison whether a value was explicitly set to null or omitted.
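For instance, a quick check might look like this -- a minimal sketch assuming System.Text.Json (ASP.NET Core's default serializer) with web defaults for case-insensitive property names:

using System.Text.Json;

var options = new JsonSerializerOptions(JsonSerializerDefaults.Web);
var dto = JsonSerializer.Deserialize<CompanyUpdateDto>(
    "{\"id\":1002,\"name\":\"xcv\",\"description\":null}", options);

bool descriptionSetToNull = dto.Description == null;           // true: present in payload as null
// A property absent from the payload keeps its sentinel default:
bool updatedDateOmitted = dto.UpdatedDate == Omitted.DateTime; // true: not in payload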
In the next step we need to convert from the DTO to the database entity. Since you are making an update, I assume you read the entity from the database and use AutoMapper to apply a source object onto an existing target object. A rough sketch:
// Create DTO from JSON
var dto = CreateObjectFrom(json);

// Read existing instance from database
var existing = await database.Companies.FirstAsync(c => c.Id == dto.Id);

// Map DTO onto existing entity, skipping omitted values
mapper.Map<CompanyUpdateDto, Company>(dto, existing);

// Save changes to database
await database.SaveChangesAsync();
Unfortunately, to conditionally skip a member when it still has its magic value, we have to define each one explicitly in the mapping profile:
public class MyProfile : Profile
{
    public MyProfile()
    {
        CreateMap<CompanyUpdateDto, Company>()
            .ForMember(c => c.Name, dto => dto.Condition((_, _, value) => NotOmitted.String(value)))
            .ForMember(c => c.Description, dto => dto.Condition((_, _, value) => NotOmitted.String(value)));
    }
}
We also need the helper class already referenced in the code above:
public static class NotOmitted
{
    public static bool Integer(int value) => value != Omitted.Integer;
    public static bool String(string value) => value != Omitted.String;
    public static bool DateTime(DateTime value) => value != Omitted.DateTime;
}
With this approach you can distinguish whether a value in the JSON was omitted or explicitly set to null, and by using the .Condition() call in AutoMapper you can check the value before it is applied to the destination object.
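If listing every member becomes tedious, one global condition can cover all sentinel types at once. This sketch builds on the same .Condition() mechanism shown in the question's ForAllMembers call; verify it against your own set of Omitted values:

CreateMap<CompanyUpdateDto, Company>()
    .ForAllMembers(opts => opts.Condition((src, dest, srcMember) =>
        !Equals(srcMember, Omitted.String) &&
        !Equals(srcMember, Omitted.Integer) &&
        !Equals(srcMember, Omitted.DateTime)));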

ViewModel Object Convert to Entity Framework Object

Goal: to save a ViewModel object via Entity Framework. I have a UserViewModel object which has a list of UnitViewModel. I also have a UserAdapter class which converts a UserViewModel into an Entity Framework User object (see Convert() below).
Now, my question is: how do I convert this list of UnitViewModel to its corresponding Entity Framework Unit list? Do I have to get each object from the DB context by calling something like context.Units.Where(u => myListofUnitIDs.Contains(u.UnitID))?
public class UserViewModel
{
    public Guid? UserID { get; set; }
    public string UserName { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Password { get; set; }
    public DateTime? CreateTime { get; set; }
    public List<UnitViewModel> UserUnits { get; set; }
}

public class UnitViewModel
{
    public Guid UnitID { get; set; }
    public string Name { get; set; }
    public int? SortIndex { get; set; }
    public DateTime CreateTime { get; set; }
    public bool Assigned { get; set; }
}
public class UserAdapter
{
    public static User Convert(UserViewModel userView)
    {
        User user;
        if (userView.UserID.HasValue)
        {
            using (var provider = new CoinsDB.UsersProvider())
            {
                user = provider.GetUser(userView.UserID.Value);
            }
        }
        else
        {
            user = new User();
        }

        user.FirstName = userView.FirstName;
        user.LastName = userView.LastName;
        user.Password = StringHelper.GetSHA1(userView.Password);
        user.UserName = userView.UserName;
        user.CreateTime = DateTime.Now;

        // Problem here :)
        // user.Units = userView.UserUnits;
        return user;
    }
}
UPDATE: The main concern here is that I have to retrieve each Unit from the database to match (or map) it with the ViewModel.Unit objects, right? Can I avoid that?
For your information, this operation is mainly called mapping: you want to map your view model object to the entity object.
For this, you can either use an existing third-party library such as AutoMapper, or write your own mappers. AutoMapper maps properties with the same name via reflection, and you can add custom logic with its After method. This approach has advantages and disadvantages, and being aware of the disadvantages will help you decide whether to use it, so I suggest reading some articles about the pros and cons of AutoMapper, especially for converting entities to other models. One such disadvantage is that if you rename a property of the view model later, AutoMapper will silently stop mapping it and you won't get any warning. With AutoMapper, the conversion of the units would look something like this:
foreach (var item in userView.UserUnits)
{
    // get the mapped instance of UnitViewModel as UserUnit
    var userUnit = Mapper.Map<UnitViewModel, UserUnit>(item);
    user.Units.Add(userUnit);
}
So, I recommend writing your own custom mappers. For example, I have created a custom library for this, and it maps objects like this:
user.Units = userView.UserUnits
    .Select(userUnitViewModel => userUnitViewModel.MapTo<UserUnit>())
    .ToList();
And I am implementing these mapping functions as:
public class UserUnitMapper :
    IMapToNew<UnitViewModel, UserUnit>
{
    public UserUnit Map(UnitViewModel source)
    {
        return new UserUnit
        {
            Name = source.Name,
            ...
        };
    }
}
And then at runtime I detect the types of the objects to be mapped and call the Map method. This way, your mappers are separated from your action methods. But if you need it urgently, of course you can use this:
foreach (var item in userView.UserUnits)
{
    // map the UnitViewModel to a UserUnit by hand
    var userUnit = new UserUnit()
    {
        Name = item.Name,
        ...
    };
    user.Units.Add(userUnit);
}

How can I refactor this Dictionary to a Class?

I feel this Dictionary is holding too much information: it holds information to build an e-mail path, and it holds extra parameters to get other data needed for e-mail templates. Here is a simplified version of my sample program:
void Main()
{
    // Sample path = Root/Action/TemplateX.txt
    // Date used in another method
    Dictionary<string, object> emailDict = new Dictionary<string, object>
    {
        {"Root", "Email"},
        {"Action", "Update"},
        {"TemplateName", "TemplateX.txt"},
        {"Date", DateTime.Now},
    };

    // Create email object
    Email email = new Email();

    // Send e-mail with email dictionary
    email.SendEmail(emailDict);
}

// Define other methods and classes here
public class Email
{
    public void SendEmail(Dictionary<string, object> emailDict)
    {
        // Build path from emailDict and use parameters from emailDict
        // Send e-mail
    }
}
Are there other refactorings I should consider?
You are certainly right that what you have needs to be refactored; perhaps reading up on standard object-oriented principles would help. I would have something more like this, though I would need to know more about how you plan to use it (public setters may be desirable):
enum EmailAction { Update } // add any other possible actions

public class Email
{
    // renamed from "Email": a member cannot share the name of its enclosing type
    public string Root { get; private set; }
    public EmailAction EmailAction { get; private set; }
    public string TemplateName { get; private set; }
    public DateTime DateTime { get; private set; }

    public Email(string root, EmailAction action, string templateName, DateTime dateTime)
    {
        this.Root = root;
        this.EmailAction = action;
        this.TemplateName = templateName;
        this.DateTime = dateTime;
    }

    public void Send()
    {
        // Build path from properties on this instance of Email
    }
}
Then you can simply go:
Email newEmail = new Email("Email", EmailAction.Update, "TemplateX.txt", DateTime.Now);
newEmail.Send();
That is definitely abusing a Dictionary. You lose all type safety by having your value be an object, which leaves you open to InvalidCastExceptions and a whole bunch of other issues. Just pull all of your values out into properties in a class:
public class EmailFields
{
    public string Root { get; set; }
    public string Action { get; set; }
    public string TemplateName { get; set; }
    public DateTime Date { get; set; }

    public EmailFields()
    {
        Date = DateTime.Now;
    }
}
Your SendEmail method would then take an EmailFields object as a parameter.
From this point, I'd also probably make enums for Action and TemplateName.
public enum Action
{
    Update,
}

public enum Template
{
    TemplateX,
}
And your properties would then be:

public Action EmailAction { get; set; }
public Template TemplateName { get; set; }
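Putting the pieces together, SendEmail might then look like this. A minimal sketch: the EmailService class name is illustrative, and the path scheme simply mirrors the Root/Action/TemplateX.txt comment from the question.

using System.IO;

public class EmailService
{
    public void SendEmail(EmailFields fields)
    {
        // e.g. Email/Update/TemplateX.txt
        string path = Path.Combine(fields.Root,
                                   fields.EmailAction.ToString(),
                                   fields.TemplateName.ToString() + ".txt");
        // load the template at path, fill in fields.Date, and send the e-mail
    }
}

This keeps all path-building in one place, and the enums make invalid action/template combinations impossible to represent.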

Create inheritable LINQ-to-SQL mappable class

I am building a library for Windows Phone 8 which requires local databases. For obvious reasons, the user of the library will create a mappable LINQ-to-SQL class with the appropriate [Table]s and [Column]s. However, in every such class I need a few more columns for the internal functioning of the library. The idea was to include a base class in the library with members corresponding to the required columns. The user would simply inherit from this class, add his own members, and use that class as the final LINQ-to-SQL map.
So far, my base class looks like this:
// Class to contain all the essential members
[Table]
public class SyncableEntityBase : NotifyBase, ISyncableBase
{
    [Column(DbType = "INT NOT NULL IDENTITY", IsDbGenerated = true, IsPrimaryKey = true)]
    public int ItemId { get; set; }

    [Column]
    public bool IsDeleted { get; set; }

    [Column]
    public DateTime RemoteLastUpdated { get; set; }

    [Column]
    public DateTime LocalLastUpdated { get; set; }
}
And the derived class, something like this:
[Table]
public class ToDoCategory : SyncableEntityBase
{
    private string _categoryName;

    [Column]
    public string CategoryName
    {
        get
        {
            return _categoryName;
        }
        set
        {
            if (_categoryName != value)
            {
                NotifyPropertyChanging("CategoryName");
                _categoryName = value;
                NotifyPropertyChanged("CategoryName");
            }
        }
    }

    private string _categoryColor;

    [Column]
    public string CategoryColor
    {
        get
        {
            return _categoryColor;
        }
        set
        {
            if (_categoryColor != value)
            {
                NotifyPropertyChanging("CategoryColor");
                _categoryColor = value;
                NotifyPropertyChanged("CategoryColor");
            }
        }
    }
}
The idea is to end up with a final class containing the four essential columns plus the two added by the user. According to the MSDN documentation here, I need to apply [InheritanceMapping], which requires the derived type. However, as I am building a library, I have no way of knowing what types (and how many) the user will derive from my base class. Is there any way around this? How?
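For context, the MSDN pattern being referred to requires the base class to enumerate every derived type up front, roughly like this. A minimal sketch: the discriminator column name and mapping codes are illustrative only.

[Table]
[InheritanceMapping(Code = "Base", Type = typeof(SyncableEntityBase), IsDefault = true)]
[InheritanceMapping(Code = "Category", Type = typeof(ToDoCategory))] // must be known when the base is compiled
public class SyncableEntityBase : NotifyBase, ISyncableBase
{
    // LINQ-to-SQL reads this discriminator column to decide which concrete
    // type to materialize for each row.
    [Column(IsDiscriminator = true)]
    public string EntityType { get; set; }

    // ... the four essential columns as above ...
}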

MongoDb C# driver: mapping events to read database in CQRS solution

We're using MongoDb as a datasource for our application, which is built using CQRS and event sourcing. The problem we faced today is what the best way is to map (denormalize) events into the read database. For example, we have a user MongoDb collection which contains all info about a user. We have an event like this:
[Serializable]
public class PasswordChangedEvent : DomainEvent
{
    private string _hashedPassword;
    private string _salt;

    public PasswordChangedEvent()
    {
    }

    public PasswordChangedEvent(string hashedPassword, string salt, DateTime createdDate)
        : base(createdDate)
    {
        HashedPassword = hashedPassword;
        Salt = salt;
    }

    public string HashedPassword
    {
        private set { _hashedPassword = value; }
        get { return _hashedPassword; }
    }

    public string Salt
    {
        private set { _salt = value; }
        get { return _salt; }
    }
}
And a read DTO like:
public class User : BaseReportDataObject
{
    public string Name { get; set; }
    public string Email { get; set; }
    public string Gender { get; set; }
    public DateTime? BirthDate { get; set; }
    public string HashedPassword { get; set; }
    public string Salt { get; set; }
    public string RestoreHash { get; set; }
    public string OpenIdIdentifyer { get; set; }
}
Our current solution for updating documents from events goes like this: we have some mapping code for our events (BsonClassMap.RegisterClassMap etc.) and code for the update:
MongoCollection.Update(
    Query<PasswordChangedEvent>.EQ(ev => ev.AggregateId, evnt.AggregateId),
    Update<PasswordChangedEvent>
        .Set(ev => ev.HashedPassword, evnt.HashedPassword)
        .Set(ev => ev.Salt, evnt.Salt));
The code looks a little ugly and redundant to me: with all that lambda stuff we still need to provide the property values explicitly. Another way is to replace PasswordChangedEvent with the User DTO, so we no longer need the event mapping:
MongoCollection.Update(
    Query<ReadDto.User>.EQ(u => u.Id, evnt.AggregateId),
    Update<ReadDto.User>.Set(u => u.HashedPassword, evnt.HashedPassword));
So the question again: is there a better way to do this type of mapping, with two types of objects (events and DTOs) mapped to the same MongoDB collection?
It seems like this is actually a question about mapping data from one object to another?
If so, you may want to consider using something like Ditto or AutoMapper. I am the developer of Ditto and have used it effectively in a number of CQRS systems; I wrote it to handle mixing a lot of different objects' data into the same view model.
These are known as OO mappers and typically have some form of bootstrapping configuration code, often using sensible conventions to avoid all the redundancy.
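For illustration, with a modern AutoMapper the bootstrapping might look something like this. A minimal sketch, not tied to your actual persistence code; FindUser is a hypothetical read-side lookup:

using AutoMapper;

// Configure once at startup: matching property names (HashedPassword, Salt)
// are picked up by convention, so no per-property Set() calls are needed.
var config = new MapperConfiguration(cfg =>
    cfg.CreateMap<PasswordChangedEvent, ReadDto.User>());
var mapper = config.CreateMapper();

// When handling the event, apply it onto the existing read-model document;
// Map(source, destination) leaves destination members the event doesn't carry untouched.
ReadDto.User user = FindUser(evnt.AggregateId); // hypothetical lookup
mapper.Map(evnt, user);
// ...then save "user" back to the MongoDB collection as usual.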
