I'm using DDD and following the clean architecture pattern, and I'm a bit confused about the ideal location for configuring display properties for specific domain model IDs. That sounds confusing, so I think I can best explain it with an example:
Here the domain model's business logic is simple: it calculates a "scaled" value from an input, gain, and offset.
//Domain Model
public class Transducer
{
//Name is the ID
public string Name { get; set; }
public double Gain { get; set; }
public double Offset { get; set; }
public double RawValue { get; set; }
public double ScaledValue { get; private set; }
public double CalculateScaledValue(double RawValue)
{
ScaledValue = (Gain * RawValue) + Offset;
return ScaledValue;
}
}
We have a use case that coordinates user actions with the domain models and manages persistence. The details here are unimportant, so I've only included an example interface:
//execution of the business logic and persistence would go in the implementation; details left out for this example
public interface ITransducerUseCase
{
IEnumerable<string> GetAllTransducerNames();
void AddNewTransducer(string Name, double Gain, double Offset);
void SetGain(string Name, double Gain);
void SetOffset(string Name, double Offset);
void SetRawValue(string Name, double Raw);
double GetScaledValue(string Name);
}
The controller uses the use case to coordinate it with a view or another controller. This specific controller allows viewing all the transducer names and changing their Gain property.
public class Controller
{
ITransducerUseCase _TransducerUseCase;
//these are sent to the view to be displayed
public Dictionary<string, double> _transducerScaledValues = new Dictionary<string, double>();
public Controller(ITransducerUseCase TransducerUseCase)
{
_TransducerUseCase = TransducerUseCase;
//Get all the names and populate the dictionary to display.
foreach (var transducerName in _TransducerUseCase.GetAllTransducerNames())
_transducerScaledValues.Add(transducerName, _TransducerUseCase.GetScaledValue(transducerName));
}
//bound to the view
public string SelectedName { get; set; }
//bound to the view, a property for setting a new gain value
public double Gain { get; set; }
public void OnButtonClick()
{
//update the gain
_TransducerUseCase.SetGain(SelectedName, Gain);
//get the new scaled value
_transducerScaledValues[SelectedName] = _TransducerUseCase.GetScaledValue(SelectedName);
}
}
That's the scaffolding for this question. Here is the new requirement:
We want to have an application level configuration setting for the
"number of decimal places" that is displayed for the ScaledValue of
Transducer on an identity basis. So a transducer with Id
"PumpPressure" could have a different value of DisplayRounding than
the transducer with the name "PumpTemperature".
This setting must be application wide (any time the value is
displayed, use this setting). This setting could also be used if the
ScaledValue was ever logged to a file, so it's a cross cutting
business need.
The solutions I've thought of:
Placing a property in the Domain Model and returning it through the layers to the view. This does not seem like a logical place because the DisplayRounding property does not have any relevance to the business logic.
public class Transducer
{
//This seems like an SRP violation
public int DisplayRounding { get; set; }
//Name is the ID
public string Name { get; set; }
public double Gain { get; set; }
public double Offset { get; set; }
public double ScaledValue { get; private set; }
public double CalculateScaledValue(double RawValue)
{
ScaledValue = (Gain * RawValue) + Offset;
return ScaledValue;
}
}
If not there, then where?
Could we put it in a separate Domain Model without any business logic? Persistence could be managed by the same Use Case class or a separate one.
public class TransducerDisplaySettings
{
public int Rounding { get; set; }
//plus other related properties
}
Pros: It separates out the concerns better than having one combined model.
Cons: The model does not have any business logic, is this okay?
We've also considered managing these settings completely on the outer layers with some sort of service.
Pros: No domain models without business logic
Cons: Would probably be tied to a specific framework?
Are there more pros/cons I'm missing? Is one approach obviously better than the other? Is there an approach that I completely missed? Thanks!
The central decision you have to make is whether the display rounding is an aspect of your application's business logic or "just an aspect of display".
If you consider it important for your business logic, it should be modeled with your entities.
If you consider it just an aspect of "presenting values to the user" (so not relevant for business rules), it should be stored in a separate repository or service and then applied by the "presenter".
[table("NameTable")]
public class Transducer
{
//Name is the ID
[Key] //the table's primary key
public string Name { get; set; }
public double Gain { get; set; }
public double Offset { get; set; }
public double RawValue { get; set; }
public double ScaledValue { get; private set; }
public double CalculateScaledValue(double RawValue)
{
ScaledValue = (Gain * RawValue) + Offset;
return ScaledValue;
}
}
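If you treat the rounding as a presentation concern, a minimal sketch of that second option might look like this (IDisplaySettingsRepository and TransducerPresenter are illustrative names, not code from the question):
//hypothetical repository for per-transducer display settings, kept out of the domain model
public interface IDisplaySettingsRepository
{
    //returns the configured decimal places for a transducer name, or a default
    int GetDisplayRounding(string transducerName);
}
//the presenter applies the setting wherever a scaled value leaves the application,
//whether that is a view or a log file
public class TransducerPresenter
{
    private readonly ITransducerUseCase _useCase;
    private readonly IDisplaySettingsRepository _settings;

    public TransducerPresenter(ITransducerUseCase useCase, IDisplaySettingsRepository settings)
    {
        _useCase = useCase;
        _settings = settings;
    }

    public string FormatScaledValue(string name)
    {
        double value = _useCase.GetScaledValue(name);
        int decimals = _settings.GetDisplayRounding(name);
        return value.ToString("F" + decimals); //e.g. "F2" formats with 2 decimal places
    }
}
Because the requirement is cross-cutting, the same presenter (or a small formatting service) can be reused by both the view layer and any file logger.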
I am having some issues figuring out the correct way to model a many-to-many relationship in my Realm database, namely around the fact that Realm objects are always live.
The model in question revolves around two objects: Event and Inventory. An Event can have multiple inventory items assigned to it (think chairs, plates, forks, etc.), and an inventory item can be assigned to multiple events. When we assign an item to an event, we define how many of that item we want to assign. However, this is where the problem arises: since Realm objects are always live and the object types are the same, whatever data an Event has will affect my inventory data row as well.
The big picture is that I want to show how many items are assigned for each upcoming event when I go into my Inventory detail view. For example, I may have 50 total chairs and have assigned 40 for an event tomorrow, which means I cannot assign another 20 if someone tries to schedule an event that day as well.
My Realm objects look as follows:
public class Event : RealmObject
{
[PrimaryKey]
public string EventId { get; set; }
[Indexed]
public string VenueId { get; set; }
[Indexed]
public string Name { get; set; }
public DateTimeOffset DateOfEventUTC { get; set; }
public IList<Inventory> Items { get; }
}
public class Inventory : RealmObject
{
[PrimaryKey]
public string InventoryId { get; set; }
[Indexed]
public string VenueId { get; set; }
public Category Category { get; set; }
public int Count { get; set; }
public string Name { get; set; }
public string Description { get; set; }
[Backlink(nameof(Event.Items))]
public IQueryable<Event> Events { get; }
}
I then try to do what I want (namely, showing how many of the item are assigned for that event) in my VM like so:
var item = unitOfWork.InventoryRepository.GetById(inventoryId);
var nextMonth = DateTime.UtcNow.AddMonths(1);
AssignedEvents = item.Events
.Where(x => x.DateOfEventUTC >= DateTime.UtcNow && x.DateOfEventUTC <= nextMonth)
.ToList()
.Select(x => new AssignedEventModel
{
DateOfEventUTC = x.DateOfEventUTC.DateTime,
Name = x.Name,
AssignedItems = x.Items.First(z => z.InventoryId == inventoryId).Count
})
.ToList();
Unfortunately, this is where the problem arises. I tried applying the [Ignored] tag as recommended in the Realm docs so that the item would no longer be persisted, but that did not solve my issue. I am still new to Realm, and I am much more familiar with SQL than NoSQL.
I struggle to see how this could work in SQL either, but I'm not an expert there, so I may be missing some details that would allow this to work the way you structured it.
Coming back to our case: the problem has little to do with Realm being live, but more to do with the way you structured your domain models.
If you use the same "Inventory" model to do 2 things:
keep track of the total amount of each item
keep track of the amount of each inventory item used in a specific event
you'll have problems with what Count really represents.
Creating a third model would solve all your problems.
Inventory => for the overall amount of an item
Event => representing the event and all its data
EventInventory => representing the amount of an item used in that event
Not having much information about your project and your other models (see AssignedEventModel, etc.), I can suggest something along these lines:
class Event : RealmObject
{
[PrimaryKey]
public string EventId { get; set; }
// ... other fields you need ...
public DateTimeOffset DateOfEventUTC { get; set; }
[Backlink(nameof(EventInventory.Event))]
public IList<EventInventory> Items { get; }
}
class EventInventory : RealmObject
{
public Inventory Inventory { get; set; }
public int Count { get; set; }
public Event Event { get; set; }
}
class Inventory : RealmObject
{
[PrimaryKey]
public string InventoryId { get; set; }
// ... other fields you need ...
public int TotalCount { get; set; }
[Backlink(nameof(EventInventory.Inventory))]
public IQueryable<EventInventory> EventInventories { get; }
}
Then in your Inventory's VM
var inventory = unitOfWork.InventoryRepository.GetById(inventoryId);
var inUse = inventory.EventInventories
.Where(x => /*...*/)
.Sum(x => x.Count);
// the count you want to data-bind and show in the Inventory view
remainingCount = inventory.TotalCount - inUse;
So basically, now you can calculate how much is left available of a certain InventoryItem in a certain time frame. With these models you should be able to create your AssignedEventModel if you need to.
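For instance, a rough sketch of that projection (assuming AssignedEventModel keeps the shape from the question and Event still has a Name property):
var now = DateTimeOffset.UtcNow;
var nextMonth = now.AddMonths(1);
AssignedEvents = inventory.EventInventories
    .ToList() //materialize first; Realm's LINQ support for traversing links varies by version
    .Where(x => x.Event.DateOfEventUTC >= now && x.Event.DateOfEventUTC <= nextMonth)
    .Select(x => new AssignedEventModel
    {
        DateOfEventUTC = x.Event.DateOfEventUTC.DateTime,
        Name = x.Event.Name,     //assumes Event keeps a Name property
        AssignedItems = x.Count  //the per-event count now lives on EventInventory
    })
    .ToList();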
I hope this helps.
On a side note, I noticed that you are using the unit-of-work and repository patterns (at least, so it seems). Although it may look like a great idea, it is generally discouraged when working with Realm, simply because you will miss out on some of Realm's most powerful features.
You can read more about this here in the "Repository" section of the answer.
I've created a class that represents a component. This component has a width, height, x-coordinate, y-coordinate, etc. When I manipulate the width, height, x, and y, I want to keep the logic within the class. But there is an interface object within the Component class that has similar values. This interface can be used to talk to different types of CAD software. The Shape interface can be null, though.
So my question is: what would be the best approach for this? In the example below, when I change "Y", should I check for null on the Shape interface? Or should the Component class expose events that the Shape interface registers to? What would be best practice for designing this, and what would give the best performance?
Appreciate it!
public class Component
{
private double _y;
public IShape Shape { get; set; }
public string Name { get; set; }
public double Width { get; set; }
public double Height { get; set; }
public double X { get; set; }
public double Y
{
get => _y;
set
{
_y = value;
if (Shape != null) Shape.Y = value;
}
}
public void Update_Shape()
{
//used to update the Shape Interface after it is assigned
}
}
public interface IShape
{
string Name { get; set; }
double Width { get; set; }
double Height { get; set; }
double X { get; set; }
double Y { get; set; }
}
UPDATE: To give more details, my interface will be able to talk to Microsoft Visio and AutoCAD. They are only meant to be used as a visual representation of the data; they are not in control of how many shapes there are or where they are positioned. So in my application, the user can move or change width/height within the application. If they have Visio open at the time, I want it to update the Visio shapes as well. If it isn't open, then it doesn't matter (it will end up being updated later on). Same goes for AutoCAD.
The best practice in this situation depends on what your design goals are.
If you want to automatically update IShape and performance is critical, then manually writing out your setters with a null check gives you both. Having an event that the IShape subscribes to forces you to invoke the event, which is more expensive than checking for null. This approach also keeps the mess inside the class, since callers only need to assign myComponent.X = 20;
Having an event has its benefits. If you look up the Observer pattern you'll find lots of good reads on this. If you have more than one IShape that would subscribe to your Component, say from both Visio and AutoCAD at the same time, this would be the way to go.
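A minimal sketch of that observer-style wiring (the event name and adapter class are mine, not an existing API):
public class Component
{
    private double _y;
    public event EventHandler<double> YChanged;

    public double Y
    {
        get => _y;
        set
        {
            _y = value;
            //notify every subscribed shape (Visio, AutoCAD, ...) of the change
            YChanged?.Invoke(this, value);
        }
    }
}
//each CAD adapter subscribes independently; the component never needs to know
//whether zero, one, or many shapes are listening
public class VisioShapeAdapter
{
    public VisioShapeAdapter(Component component, IShape shape)
    {
        component.YChanged += (sender, y) => shape.Y = y;
    }
}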
Now in terms of performance: if you're updating fewer than a few thousand components per second and you want cleaner code, I would just call Update_Shape() when you want to synchronize the values. If you are assigning multiple values at the same time, you can wrap them in an action that automatically synchronizes the values after it completes.
var c = new Component();
c.Shape = new Shape();
c.UpdateShapes(s => {
s.Height = 100;
s.Width = 100;
s.X = 5;
});
public class Component
{
public IShape Shape { get; set; }
public string Name { get; set; }
public double Width { get; set; }
public double Height { get; set; }
public double X { get; set; }
public double Y { get; set; }
public void UpdateShapes(Action<Component> update)
{
update(this);
SynchronizeShapes();
}
public void SynchronizeShapes()
{
if (Shape != null)
{
Shape.Name = Name;
Shape.Width = Width;
Shape.Height = Height;
Shape.X = X;
Shape.Y = Y;
}
}
}
I have to solve a problem where I need a different calculation for each sensor type (I need to decide which type to instantiate at run time).
Let me show you in an example:
From the database table I get this result:
SensorType  RawValue  ADCGain  R0Value  I0Value  ABCValue
1           100       0.1      NULL     NULL     NULL
1           150       0.2      NULL     NULL     NULL
2           30        NULL     1        2        2
2           15        NULL     5        5        6
Let's say sensor type 1 is the concrete type AISensor, which inherits from a base class, and type 2 is Pt100Tempsensor, which inherits from the same base class.
Here is the class definition in C#:
public abstract class Sensor
{
public abstract int Id { get; set; }
public abstract string Code { get; set; }
public abstract string Name { get; set; }
public abstract double GetCalculatedValue(int rawValue);
}
public class Pt100Tempsensor : Sensor
{
public int R0Value { get; set; }
public int I0value { get; set; }
public int ABCValue { get; set; }
public override int Id { get; set; }
public override string Code { get; set; }
public override string Name { get; set; }
public override double GetCalculatedValue(int rawValue)
{
//cast to double so the integer division doesn't truncate the result
return ((double)(R0Value * I0value) / ABCValue) * rawValue;
}
}
public class AISensor : Sensor
{
public int AdcGain { get; set; }
public override int Id { get; set; }
public override string Code { get; set; }
public override string Name { get; set; }
public override double GetCalculatedValue(int rawValue)
{
return rawValue * AdcGain;
}
}
Now I am wondering: what is the best way to instantiate these objects at run time so that if I add a new sensor type, I don't need to change existing code (as I would with the simple factory method "pattern")?
Thanks for any help.
Use a factory that creates concrete sensors based on the sensor type ID.
The easiest way is a factory method with a switch on the ID that creates a different sensor depending on the ID number.
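A minimal sketch of that switch (assuming the caller populates the per-type properties from the database row afterwards):
using System;

public static class SensorFactory
{
    public static Sensor Create(int sensorType)
    {
        switch (sensorType)
        {
            case 1: return new AISensor();
            case 2: return new Pt100Tempsensor();
            default: throw new ArgumentException($"Unknown sensor type {sensorType}");
        }
    }
}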
If you want to be able to add sensors without changing any other code (i.e. without altering the factory method), then you need to use reflection to (1) discover all available sensor types and (2) construct the right one based on its ID. You could do this with an attribute, e.g.:
[Sensor(42)]
public class Pt100Tempsensor : Sensor
{
....
where 42 is the ID.
But to be honest, it's only worth the effort of doing this if you are really going to have loads of sensor types.
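For reference, a sketch of the attribute-plus-reflection approach (SensorAttribute and the scanning code are assumptions, not an existing library):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Class)]
public sealed class SensorAttribute : Attribute
{
    public int Id { get; }
    public SensorAttribute(int id) => Id = id;
}

public static class ReflectionSensorFactory
{
    //scan the assembly once: map sensor type IDs to concrete Sensor types via the attribute
    private static readonly Dictionary<int, Type> _types =
        typeof(Sensor).Assembly.GetTypes()
            .Where(t => !t.IsAbstract && typeof(Sensor).IsAssignableFrom(t))
            .Select(t => (SensorType: t, Attr: t.GetCustomAttribute<SensorAttribute>()))
            .Where(x => x.Attr != null)
            .ToDictionary(x => x.Attr.Id, x => x.SensorType);

    public static Sensor Create(int sensorTypeId) =>
        (Sensor)Activator.CreateInstance(_types[sensorTypeId]);
}
Adding a new sensor is then just a new class with a [Sensor(n)] attribute; no factory edits are required.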
I second the suggestion for a simple factory pattern.
But: If you really need to add new sensors without touching existing code, I can only think of a "plugin system". That means:
You have a common interface.
You have some sort of configuration where you can map IDs to Implementations.
You have a Factory that can detect implementations and create instances based on above mentioned configuration.
For reference: Microsoft has a little example code. See https://code.msdn.microsoft.com/windowsdesktop/Creating-a-simple-plugin-b6174b62
You can use an ORM (Entity Framework, NHibernate) to achieve this. When you create a new Sensor class you still have to modify code, but only in the database mapping part.
That table looks like a table with a TPH (table-per-hierarchy) mapping strategy, where SensorType is the discriminator column.
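For illustration, a TPH mapping in Entity Framework Core could look roughly like this (the context and discriminator configuration are my sketch, not code from the question):
using Microsoft.EntityFrameworkCore;

public class SensorContext : DbContext
{
    public DbSet<Sensor> Sensors { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        //all sensors share one table; the SensorType column picks the concrete class
        modelBuilder.Entity<Sensor>()
            .HasDiscriminator<int>("SensorType")
            .HasValue<AISensor>(1)
            .HasValue<Pt100Tempsensor>(2);
    }
}
Querying context.Sensors then materializes the right subclass automatically, and GetCalculatedValue dispatches polymorphically; a new sensor type only needs a new class and one extra HasValue line in the mapping.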
What are good strategies for rebuilding/enriching a nested or complex ViewModel?
A common way to rebuild a flat ViewModel is shown here
But building and rebuilding a nested ViewModel using that method is too complex.
Models
public class PersonInfo
{
public int Id { get; set; }
public string Name { get; set; }
public int Nationality { get; set; }
public List<Address> Addresses { get; set; }
}
public class Address
{
public int AddressTypeID { get; set; }
public string Country { get; set; }
public string PostalCode { get; set; }
}
public class AddressType
{
public int Id { get; set; }
public string Description { get; set; }
}
view models
public class PersonEditModel
{
public int Id { get; set; }
public string Name { get; set; } //read-only
public int Nationality { get; set; }
public List<AddressEditModel> Addresses { get; set; }
public List<SelectListItem> NationalitySelectList { get; set; } //read-only
}
public class AddressEditModel
{
public int AddressTypeId { get; set; }
public string AddressDescription { get; set; } //read-only
public string Country { get; set; }
public string PostalCode { get; set; }
public List<SelectListItem> CountrySelectList { get; set; } //read-only
}
actions
public ActionResult Update(int id)
{
var addressTypes = service.GetAddressTypes();
var person = service.GetPerson(id);
var personEditModel= Map<PersonEditModel>.From(person);
foreach(var addressType in addressTypes)
{
var address = person.Addresses.SingleOrDefault(i => i.AddressTypeID == addressType.Id);
if(address == null)
{
personEditModel.Addresses.Add(new AddressEditModel
{
AddressTypeId = addressType.Id
});
}
else
{
personEditModel.Addresses.Add(Map<AddressEditModel>.From(address));
}
}
EnrichViewModel(personEditModel, person, addressTypes); //populate read-only data such as SelectList
return Index(personEditModel);
}
[HttpPost]
public ActionResult Update(PersonEditModel editModel)
{
if(!ModelState.IsValid)
{
var person = service.GetPerson(editModel.Id);
var addressTypes = service.GetAddressTypes();
EnrichViewModel(editModel, person, addressTypes);
return View(editModel);
}
service.Save(...);
return RedirectToAction("Index");
}
//populate read-only data such as SelectList
private void EnrichViewModel(PersonEditModel personEditModel, Person person, IEnumerable<AddressType> addressTypes)
{
personEditModel.Name = person.Name;
personEditModel.NationalitySelectList = GetNationalitySelectList();
foreach(var addressEditModel in personEditModel.Addresses)
{
addressEditModel.AddressDescription = addressTypes.Where(i => i.Id == addressEditModel.AddressTypeId).Select(i => i.Description).FirstOrDefault();
addressEditModel.CountrySelectList = GetCountrySelectList(addressEditModel.AddressTypeId);
}
}
My code for building and rebuilding the ViewModels (PersonEditModel and AddressEditModel) is too ugly. How do I restructure my code to clean this mess?
One easy way is to always build a new view model instead of merging/rebuilding, since MVC will overwrite the fields with the values in ModelState anyway:
[HttpPost]
public ActionResult Update(PersonEditModel editModel)
{
if(!ModelState.IsValid)
{
var newEditModel = BuildPersonEditModel(editModel.Id);
return View(newEditModel);
}
but I'm not sure that this is a good idea. Is it? Are there other solutions besides AJAX?
I'm going to tackle your specific pain points one-by-one and I'll try to present my own experience and likely solutions along the way. I'm afraid there is no best answer here. You just have to pick the lesser of the evils.
Rebuilding Dropdownlists
They are a bitch! There is no escaping rebuilding them when you re-render the page. While HTML Forms are good at remembering the selected index (and they will happily restore it for you), you have to rebuild them. If you don't want to rebuild them, switch to Ajax.
Rebuilding Rest of View Model (even nested)
HTML forms are good at rebuilding the whole model for you, as long as you stick to inputs and hidden fields and other form elements (selects, textarea, etc).
There is no avoiding posting back the data if you don't want to rebuild it, but you need to ask yourself which is more efficient: posting back a few extra bytes, or making another query to fetch the missing pieces?
If you don't want to post back the read-only fields but still want the model binder to work, you can exclude the properties via [Bind(Exclude="Name,SomeOtherProperty")] on the view model class. In this case, you probably need to set them again before sending them back to the browser.
// excluding specific props. note that you can also "Include" instead of "Exclude".
[Bind(Exclude="Name,NationalitySelectList")]
public class PersonEditModel
{
...
If you exclude those properties, you don't have to resort to hidden fields and posting them back, as the model binder will simply ignore them and you will still get the values you need populated back.
Personally, I use Edit Models which contain just the post-able data, instead of Bind magic. Apart from avoiding magic strings like Bind requires, they give me the benefits of strong typing and clearer intent. I use my own mapper classes to do the mapping, but you can use something like AutoMapper to manage the mapping for you as well.
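As a rough illustration of that split (these model names are mine): the post model carries only what the form actually posts, and a mapper copies it onto the domain object.
//only post-able fields; no select lists, no read-only display data
public class PersonPostModel
{
    public int Id { get; set; }
    public int Nationality { get; set; }
    public List<AddressPostModel> Addresses { get; set; }
}
public class AddressPostModel
{
    public int AddressTypeId { get; set; }
    public string Country { get; set; }
    public string PostalCode { get; set; }
}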
Another idea may be to cache the initial ViewModel in Session until a successful POST is made. That way, you do not have to rebuild it from the ground up; you just merge the initial one with the submitted one in case of validation errors.
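A sketch of the Session idea (the key name and the BuildPersonEditModel helper are assumptions):
public ActionResult Update(int id)
{
    var personEditModel = BuildPersonEditModel(id);
    Session["PersonEditModel"] = personEditModel; //cache until a successful POST
    return View(personEditModel);
}

[HttpPost]
public ActionResult Update(PersonEditModel editModel)
{
    if (!ModelState.IsValid)
    {
        //merge: reuse the cached read-only data, keep the user's posted values
        var cached = (PersonEditModel)Session["PersonEditModel"];
        editModel.Name = cached.Name;
        editModel.NationalitySelectList = cached.NationalitySelectList;
        return View(editModel);
    }
    Session.Remove("PersonEditModel");
    //save and redirect as before
    return RedirectToAction("Index");
}
The nested address select lists would need the same merge treatment.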
I fight these same battles every time I work with Forms and finally, I've started to just suck it up and go fully AJAX for anything that's not a simple name-value collection type form. Besides being headache free, it also leads to better UX.
P.S. The link you posted is essentially doing the same thing you're doing; it's just using a mapper framework to map properties between the domain and view models.
I'm trying to improve my application's design, so instead of calling the data access layer from the presentation layer, I'll try to implement a save method on my object in the BusinessObjects layer. But I'm not sure how to pass the object or its properties through the layers. For example, in my old design I just create an instance of my object in the presentation layer, assign its properties, then call the DataAccess method that saves this info in the database, passing the object as a parameter as illustrated.
DAL
public static void SaveObject(Object obj)
{
int id = obj.id;
string label = obj.label;
}
PL
Object obj = new Object();
obj.id = 1;
obj.label = "test";
DAL.SaveObject(obj);
but I just want to do this in my PL
Object obj = new Object();
obj.id = 1;
obj.label = "test";
obj.SaveObject();
Is that possible? and how would my DAL look like ?
Edit: Explaining my requirements
I'll base my code right now on a very important object in my system.
BusinessEntitiesLayer uses BusinessLogic Layer
namespace BO.Cruises
{
public class Cruise
{
public int ID
{ get; set; }
public string Name
{ get; set; }
public int BrandID
{ get; set; }
public int ClassID
{ get; set; }
public int CountryID
{ get; set; }
public string ProfilePic
{ get; set; }
public bool Hide
{ get; set; }
public string Description
{ get; set; }
public int OfficialRate
{ get; set; }
public string DeckPlanPic
{ get; set; }
public string CabinsLayoutPic
{ get; set; }
public List<Itinerary> Itineraries
{ get; set; }
public List<StatisticFact> Statistics
{ get; set; }
public List<CabinRoomType> RoomTypesQuantities
{ get; set; }
public List<CabinFeature> CabinFeatures
{ get; set; }
public List<CruiseAmenity> Amenities
{ get; set; }
public List<CruiseService> Services
{ get; set; }
public List<CruiseEntertainment> Entertainment
{ get; set; }
public List<CustomerReview> CustomerReviews
{ get; set; }
}
}
BusinessLogicLayer uses DataAccessLayer
Actually, this layer is intended to validate my object and then call the DAL methods, but I haven't implemented any validation yet, so I'm just using it to call the DAL methods.
public static void Save(object cruise)
{
CruisesDAL.Save(cruise);
}
The DataAccessLayer tries to reference BusinessEntities, but that gives me a circular dependency error!
It's supposed to receive the object and cast it as a Cruise entity:
public static void Save(object cruise)
{
Cruise c = cruise as Cruise;
//access the object c properties and save them to the database
}
Code sample from my project:
public static List<Cruise> GetCruisesList()
{
string commandText = "SELECT ID, Name + CASE Hide WHEN 1 Then ' (Hidden)' ELSE '' END AS Name FROM Cruises";
List<Cruise> cruises = new List<Cruise>();
Cruise cruise;
using (SqlConnection connection = new SqlConnection(ConnectionString))
{
using (SqlCommand command = new SqlCommand(commandText, connection))
{
connection.Open();
using (SqlDataReader reader = command.ExecuteReader())
{
while (reader.Read())
{
cruise = new Cruise();
cruise.ID = Convert.ToInt32(reader["ID"]);
cruise.Name = reader["Name"].ToString();
cruises.Add(cruise);
}
}
}
}
return cruises;
}
PresentationLayer uses BusinessEntities
Input controls (TextBoxes, DropDownList, etc)
When the save button is clicked I take all the values, create a Cruise object and call Cruise.Save();
You should avoid mixing the domain model with the persistence logic. The examples given above would result in a tightly coupled solution.
To achieve .SaveObject() you can write extension methods in the BL that do the job.
BL.*
public static class ObjectPersistenceExtensions
{
    public static void SaveObject<T>(this T obj) where T : IBaseEntity
    {
        //resolve the DAL for this entity type from a service locator
        IObjectDal<T> dal = AvailableServices.Obtain<IObjectDal<T>>();
        dal.AddObject(obj);
        dal.Commit();
    }
}
This way you can still add functionality to the domain objects without coupling the persistence logic into them.
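Usage from the PL then reads as desired (assuming Cruise implements IBaseEntity):
var cruise = new Cruise { ID = 1, Name = "test" };
cruise.SaveObject(); //resolved through the extension method; the DAL stays hidden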
Passing the object itself to the data layer is usually a bit funky. Instead, I recommend that you have the object do the talking to the data layer, and let the data layer do its thing.
internal static class DataLayer {
public static bool Update(int id, string label) {
// Update your data tier
return success; // bool whether it succeeded or not
}
}
internal class BusinessObject {
public int ID {
get;
private set;
}
public string Label {
get;
set;
}
public bool Save() {
return DataLayer.Update(this.ID, this.Label); // return data layer success
}
}
The reason you would do it this way is that your data layer may not have a reference to your business object and thus would have no idea what it is, so you would not be able to pass the object itself. This is a usual scenario, because generally it is your business object assembly that references your data layer assembly.
If you have everything in the same assembly, then the above does not apply. Later on, however, if you decide to refactor your data layer into its own module (which is often how it turns out, and is good design), passing the object will break because the data layer loses its reference to your business object.
Either way you do it, you should know that you will have to update both your object and your data layer if you add a new field or member. That's just a given when you add something new.
I may write a blog on some good design practices for this, but that is my recommendation.
If you follow this pattern you will have the saving logic inside the object definition itself, so when you call this from the PL:
obj.SaveObject();
this will happen in the Object itself:
public void SaveObject()
{
DAL.SaveObject(this);
}
and your DAL stays the same as you showed above.
It's a matter of design: I would not put the saving logic inside the object itself; I would have a BusinessManager or an ObjectMapper read from the DAL and save to the DAL.
In general it is good practice to have Read/Load and Save in the same place, whether that's the BusinessObject or a BusinessManager, so that you find them easily and can update both in a breeze if you add or change a field; see the sketch below.
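A sketch of that manager shape (hypothetical names; CruisesDAL.GetById is assumed alongside the existing Save):
public static class CruiseManager
{
    public static Cruise Load(int id)
    {
        //the read path lives right next to the save path
        return CruisesDAL.GetById(id);
    }

    public static void Save(Cruise cruise)
    {
        //validation would go here before persisting
        CruisesDAL.Save(cruise);
    }
}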