I have the following class that holds the start and end values of a task
public class WorkPackage
{
public int Id { get; set; }
public string Name { get; set; }
public DateTime FromTime { get; set; }
public DateTime ToTime { get; set; }
}
I also show the saved values as follows (grouped based on FromTime).
The end user can:
Move one WorkPackage from one month to another by drag & drop.
Split one WorkPackage into two or more WorkPackages based on some rules.
So I have following methods:
public void MoveWorkPackageToMonth(WorkPackage wp, int month)
{
....
}
public List<WorkPackage> SplitWorkPackage(WorkPackage wp)
{
....
}
Each time, the user makes a lot of changes to the WorkPackage list, but the list may be rebuilt every few days for business reasons, and the user wants to apply the same changes to the re-created WorkPackage list. So I need to save the user's actions in the database in order to replay them on the re-created list.
I want to add something like a scripting language to save the user's actions as a string, something like this:
"Move(WP1){From(January) To(April)};SPLIT(WP5);"
Is there any library to help me, or do I have to define my own custom business language? (I'm using .NET 4.)
The NuGet package ITVComponents.Scripting.CScript provides a scripting engine that supports interpreted as well as compiled code.
The syntax is derived from JavaScript, with some special extensions to enable dynamic type loading and native script parts.
Unfortunately there is no documentation at the moment, but I'm working on it.
Basically, for what you want to achieve, you could use the following code:
ExpressionParser.ParseBlock(yourScript, yourObject, s => DefaultCallbacks.PrepareDefaultCallbacks(s.Scope, s.ReplSession));
where yourObject would be the object that implements the methods MoveWorkPackageToMonth and
SplitWorkPackage, a method to find a package (FindPackage?), and the months as integer properties, and your script would look something like
MoveWorkPackageToMonth(FindPackage("nameOfPackage1"), April); MoveWorkPackageToMonth(FindPackage("nameOfPackage2"), February); ...
This code runs interpreted; therefore, if you have plenty of actions to perform, it may be slow.
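To make that concrete, here is a hedged sketch of what yourObject could look like. The class name WorkPackageHost, the month properties, and the move logic are illustrative assumptions, not part of the library:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class WorkPackage
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime FromTime { get; set; }
    public DateTime ToTime { get; set; }
}

// Hypothetical host object exposed to the script engine.
public class WorkPackageHost
{
    private readonly List<WorkPackage> _packages;

    public WorkPackageHost(List<WorkPackage> packages)
    {
        _packages = packages;
    }

    // Months exposed as integer properties so a script can say "April".
    public int January { get { return 1; } }
    public int February { get { return 2; } }
    public int April { get { return 4; } }
    // ...remaining months omitted for brevity

    public WorkPackage FindPackage(string name)
    {
        return _packages.FirstOrDefault(p => p.Name == name);
    }

    public void MoveWorkPackageToMonth(WorkPackage wp, int month)
    {
        // Illustrative move rule: keep the duration, snap to day 1.
        var duration = wp.ToTime - wp.FromTime;
        wp.FromTime = new DateTime(wp.FromTime.Year, month, 1);
        wp.ToTime = wp.FromTime + duration;
    }
}
```

With a host like this, a replayable script is just a sequence of calls against the host's public members.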
My project is an online food-ordering app; its key feature is the "Daily nutrients intake monitor". This monitor shows the difference between the daily recommended intake values for 30 types of nutrients and the actual nutrient content of the foods in the user's shopping cart.
I created 30 models based on those nutrients, and each one of them has an InputData class which inherits from a base class, NutrientInputDataBase. Below is the example of the added-sugar InputData class and the base class:
public class AddedSugarUlInputData : NutrientInputDataBase
{
[ColumnName(@"AddedSugar-AMDR-UL")]
public float AddedSugar_AMDR_UL { get; set; }
}
public class NutrientInputDataBase
{
[ColumnName(@"Sex")]
public float Sex { get; set; }
[ColumnName(@"Age")]
public float Age { get; set; }
[ColumnName(@"Activity")]
public float Activity { get; set; }
[ColumnName(@"BMI")]
public float BMI { get; set; }
[ColumnName(@"Disease")]
public float Disease { get; set; }
}
From the official documents:
https://learn.microsoft.com/en-us/dotnet/machine-learning/how-to-guides/serve-model-web-api-ml-net
I understood that I need to create a PredictionEnginePool, and I already know how to register the PredictionEnginePool in the application startup file.
My app logic is: when the user adds or removes an item from the shopping cart, the front end calls the API; the back end gets the user profile first (to obtain the input data for the prediction), then returns a packaged object which contains all 30 types of nutrient prediction results.
My question is: should I register a PredictionEnginePool for each of the nutrient models individually in the Startup file, or is there a more efficient way which I haven't been aware of?
There are multiple ways for you to go about it.
Register each of your models in the PredictionEnginePool. The FromFile and FromUri methods allow you to specify a name for each of your models, so when you use them to make predictions in your application you can reference them by name.
Save your models to a database as blobs. Then you can add logic in your application to load a specific model based on criteria you specify. The downside is that you'd have to fetch your models more dynamically rather than having a PredictionEnginePool ready to go.
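For the first option, here is a hedged sketch of what the Startup registration could look like, assuming the Microsoft.Extensions.ML package; the model names, file paths, and the NutrientPrediction output class are illustrative assumptions:

```csharp
// Sketch only - not a drop-in implementation.
public void ConfigureServices(IServiceCollection services)
{
    // Each FromFile call registers one named model on the pool.
    services.AddPredictionEnginePool<AddedSugarUlInputData, NutrientPrediction>()
        .FromFile(modelName: "AddedSugarUl",
                  filePath: "MLModels/AddedSugarUl.zip",
                  watchForChanges: true);

    // Models with a different input schema need their own pool registration:
    // services.AddPredictionEnginePool<SodiumInputData, NutrientPrediction>()
    //     .FromFile(modelName: "Sodium",
    //               filePath: "MLModels/Sodium.zip",
    //               watchForChanges: true);
}

// In a controller, inject PredictionEnginePool<AddedSugarUlInputData, NutrientPrediction>
// and reference the model by name:
// var result = _pool.Predict(modelName: "AddedSugarUl", example: inputData);
```

Note that since each nutrient's input class declares its own columns, only models sharing an input/output schema can be chained on a single pool; the rest each need their own AddPredictionEnginePool registration.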
I made a registration page with a dynamic form in Orchard CMS, and received a new requirement to check the record count.
I have no idea how to do this. I looked into SubmissionAdminController.cs in the Orchard.DynamicForms.Controllers folder, but still could not find a way.
I'm thinking of getting the record count from my .cshtml view page and checking it in different places; is that possible?
To get the record count of the stored submissions, inject or resolve an IRepository<Submission>, and use the Count() method to count all items. Note that the Count() method accepts an expression, which allows you to filter by form name for example. For reference, this is what the Submission class looks like:
namespace Orchard.DynamicForms.Models {
public class Submission {
public virtual int Id { get; set; }
public virtual string FormName { get; set; }
[StringLengthMax]
public virtual string FormData { get; set; }
public virtual DateTime CreatedUtc { get; set; }
}
}
When you have an IRepository<Submission>, this is how you would count all submissions in a form called "MyForm":
var count = submissionRepository.Count(x => x.FormName == "MyForm");
If you don't have a controller or custom part or anything to inject this IRepository into, then you could resolve the repository directly from your view like this:
@{
var submissionRepository = WorkContext.Resolve<IRepository<Submission>>();
var submissionCount = submissionRepository.Count(x => x.FormName == "MyForm");
}
Make sure to import the proper namespaces:
Orchard.DynamicForms.Models for Submission
Orchard.Data for IRepository<T>
However, if you need to display this number in multiple places, it's best to create a shape so that you can reuse it. Even better would be to not resolve the repository from the shape template directly, but via an IShapeTableProvider. The primary reason for that becomes clear when you start overriding your shape template or providing shape alternates, in both of which cases you'd otherwise duplicate the logic in all of your shape templates, which isn't very DRY. And there's the more philosophical issue of separation of concerns: you don't want data-access code in your views. Rather, use that code from a controller, driver or shape table provider.
I recently started reading about rich domain models as opposed to anemic models. All the projects I worked on before followed the service pattern. In my new project I'm trying to implement a rich domain model. One of the issues I'm running into is deciding which class a behavior goes in. Consider this example -
public class Order
{
int OrderID;
string OrderName;
List<Item> OrderItems;
}
public class Item
{
int OrderID;
int ItemID;
string ItemName;
}
So in this example, I have the AddItem method in the Item class. Before I add an Item to an Order, I need to make sure a valid order ID is passed in, so I do that validation in the AddItem method. Am I on the right track with this? Or do I need to create validation in the Order class that tells whether the OrderID is valid?
Wouldn't the Order have the AddItem method? An Item is added to the Order, not the other way around.
public class Order
{
int OrderID;
string OrderName;
List<Item> OrderItems;
bool AddItem(Item item)
{
    // add the item to the list
    OrderItems.Add(item);
    return true;
}
}
In which case, the Order is valid, because it has been created. Of course, the Order doesn't know the Item is valid, so there persists a potential validation issue. So validation could be added in the AddItem method.
public class Order
{
int OrderID;
string OrderName;
List<Item> OrderItems;
public bool AddItem(Item item)
{
    // add the item only if it is valid
    if (IsValid(item))
    {
        OrderItems.Add(item);
        return true;
    }
    return false;
}
public bool IsValid(Item item)
{
    // validate and return the result of the checks
    return item != null;
}
}
All of this is in line with the original OOP concept of keeping data and its behaviors together in a class. However, how is the validation performed? Does it have to make a database call? Check inventory levels or other things outside the boundary of the class? If so, pretty soon the Order class is bloated with extra code related not to the order but to checking the validity of the Item, calling external resources, etc. This is not exactly OOPy, and definitely not SOLID.
In the end, it depends. Are the behaviors' needs contained within the class? How complex are the behaviors? Can they be used elsewhere? Are they only needed in a limited part of the object's life-cycle? Can they be tested? In some cases it makes more sense to extract the behaviors into classes that are more focused.
So, build out the richer classes, make them work, and write the appropriate tests. Then see how they look and smell, and decide if they meet your objectives and can be extended and maintained, or if they need to be refactored.
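For the simple case, before the validation grows external dependencies, a self-contained version of the classes above could look like this; the validation rule shown is an illustrative assumption:

```csharp
using System;
using System.Collections.Generic;

public class Item
{
    public int ItemID { get; set; }
    public string ItemName { get; set; }
}

public class Order
{
    private readonly List<Item> _items = new List<Item>();

    public int OrderID { get; set; }
    public string OrderName { get; set; }
    public IReadOnlyList<Item> OrderItems { get { return _items; } }

    public bool AddItem(Item item)
    {
        // Only valid items make it into the list.
        if (!IsValid(item))
            return false;
        _items.Add(item);
        return true;
    }

    public bool IsValid(Item item)
    {
        // Illustrative rule only; a real order might consult inventory
        // or pricing through a collaborator instead of doing it here.
        return item != null && !string.IsNullOrEmpty(item.ItemName);
    }
}
```

The point is that the data and the behavior live together, and the class stays testable without any infrastructure.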
First of all, every item is responsible for its own state (information). In good OOP design an object can never be put into an invalid state; you should at least try to prevent it.
To do that, you cannot have public setters if one or more fields are required in combination.
In your example an Item is invalid if it's missing the orderId or the itemId. Without that information the order cannot be completed.
Thus you should implement that class like this:
public class Item
{
public Item(int orderId, int itemId)
{
if (orderId <= 0) throw new ArgumentException("Order is required");
if (itemId <= 0) throw new ArgumentException("ItemId is required");
OrderId = orderId;
ItemId = itemId;
}
public int OrderID { get; private set; }
public int ItemID { get; private set; }
public string ItemName { get; set; }
}
See what I did there? I ensured that the item is in a valid state from the beginning by requiring and validating the information directly in the constructor.
The ItemName is just a bonus; it's not required to be able to process an order.
If the property setters are public, it's easy to forget to set both of the required fields, producing one or more bugs later when that information is processed. By forcing the information to be included, and validating it, you catch bugs much earlier.
Order
The order object must ensure that its entire structure is valid. Thus it needs to have control over the information it carries, which also includes the order items.
If you have something like this:
public class Order
{
int OrderID;
string OrderName;
List<Item> OrderItems;
}
You are basically saying: I have order items, but I do not really care how many there are or what they contain. That is an invitation to bugs later in the development process.
Even if you say something like this:
public class Order
{
int OrderID;
string OrderName;
List<Item> OrderItems;
public void AddItem(Item item);
public void ValidateItem(Item item);
}
You are communicating something like: please be nice, validate the item first and then add it through the Add method. However, if you have an order with ID 1, someone could still do order.AddItem(new Item { OrderId = 2, ItemId = 1 }) or order.OrderItems.Add(new Item { OrderId = 2, ItemId = 1 }), thus making the order contain invalid information.
IMHO a ValidateItem method doesn't belong in Order but in Item, as it is the item's own responsibility to be in a valid state.
A better design would be:
public class Order
{
private List<Item> _items = new List<Item>();
public Order(int orderId)
{
if (orderId <= 0) throw new ArgumentException("OrderId must be specified");
OrderId = orderId;
}
public int OrderId { get; private set; }
public string OrderName { get; set; }
public IReadOnlyList<Item> OrderItems { get { return _items; } }
public void Add(Item item)
{
if (item == null) throw new ArgumentNullException("item");
//make sure that the item is for us
if (item.OrderId != OrderId) throw new InvalidOperationException("Item belongs to another order");
_items.Add(item);
}
}
Now you have control over the entire order: if changes are to be made to the item list, they have to be made directly through the order object.
However, an item can still be modified without the order knowing it. Someone could, for instance, do order.OrderItems.First(x => x.ItemID == 3).ApplyDiscount(10.0);, which would be fatal if the order had a cached Total field.
However, good design is not always about doing everything 100% properly; it's a tradeoff between code that we can work with and code that does everything right according to principles and patterns.
I would agree with the first part of dbugger's solution, but not with the part where the validation takes place.
You might ask: "Why not dbugger's code? It's simpler and has fewer methods to implement!"
Well, the reason is that the resulting code would be somewhat confusing.
Just imagine someone used dbugger's implementation.
He could possibly write code like this:
[...]
Order myOrder = ...;
Item myItem = ...;
[...]
bool isValid = myOrder.IsValid(myItem);
[...]
Someone who doesn't know the implementation details of dbugger's IsValid method would simply not understand what this code is supposed to do.
Worse than that, he or she might guess that this is a comparison between an order and an item.
That is because this method has weak cohesion and violates the single responsibility principle of OOP.
Both classes should only be responsible for validating themselves.
If the validation also includes the validation of a referenced class (like Item in Order), then the item could be asked whether it is valid for a specific order:
public class Item
{
public int ItemID { get; set; }
public string ItemName { get; set; }
public bool IsValidForOrder(Order order)
{
// order-item validation code
}
}
If you use this approach, take care not to call a method that triggers an item validation from within the item validation method; the result would be an infinite loop.
[Update]
Now Trailmax has stated that accessing a DB from within the validation code of the application domain would be problematic, and that he uses a special ItemOrderValidator class to do the validation.
I totally agree with that.
In my opinion you should never access the DB from within the application domain model.
I know there are some patterns, like Active Record, that promote such behaviour, but I find the resulting code always a tiny bit unclean.
So the core question is: how to integrate an external dependency in your rich domain model.
From my point of view there are just two valid solutions to this.
1) Don't. Just make it procedural. Write a service that lives on top of an anemic model. (I guess that is Trailmax's solution.)
or
2) Include the (formerly) external information and logic in your domain model. The result will be a rich domain model.
Just like Yoda said: Do or do not. There is no try.
But the initial question was how to design a rich domain model instead of an anemic domain model.
Not how to design an anemic domain model instead of a rich domain model.
The resulting classes would look like this:
public class Item
{
public int ItemID { get; set; }
public int StockAmount { get; set; }
public string ItemName { get; set; }
public void Validate(bool validateStocks)
{
if (validateStocks && this.StockAmount <= 0) throw new Exception ("Out of stock");
// additional item validation code
}
}
public class Order
{
public int OrderID { get; set; }
public string OrderName { get; set; }
public List<Item> OrderItems { get; set; }
public void Validate(bool validateStocks)
{
if(!this.OrderItems.Any()) throw new Exception("Empty order.");
this.OrderItems.ForEach(item => item.Validate(validateStocks));
}
}
Before you ask: you will still need a (procedural) service method to load the data (the order with its items) from the DB and trigger the validation (of the loaded Order object).
But the difference from an anemic domain model is that this service does NOT contain the validation logic itself.
The domain logic is within the domain model, not within the service/manager/validator or whatever name you give your service classes.
Using a rich domain model means that the services just orchestrate different external dependencies; they don't contain domain logic.
So what if you want to update your domain data at a specific point within your domain logic, e.g. immediately after the IsValidForOrder method is called?
Well, that would be a problem.
If you really have such a transaction-oriented demand, I would recommend not using a rich domain model.
[Update: DB-related ID checks removed - persistence checks should be in a service]
[Update: Added conditional item stock checks, code cleanup]
If you go with a rich domain model, implement the AddItem method inside Order. But SOLID principles don't want you to do validation and other things inside this method.
Imagine you have an AddItem() method in Order that validates the item and recalculates the total order sum, including taxes. Your next change is that validation depends on country, selected language and selected currency. Your next change is that taxes depend on country too. The next requirements can be translation checks, discounts, etc. Your code will become very complex and difficult to maintain. So I think it is better to have something like this inside AddItem:
public void AddItem(IOrderItemContext orderItemContext) {
var orderItem = _orderItemBuilder.BuildItem(_orderContext, orderItemContext);
_orderItems.Add(orderItem);
}
Now you can test item creation and adding the item to the order separately. Your IOrderItemBuilder.BuildItem() method can look like this for some country:
public IOrderItem BuildItem(IOrderContext orderContext, IOrderItemContext orderItemContext) {
var orderItem = Build(orderItemContext);
_orderItemVerifier.Verify(orderItem, orderContext);
totalTax = _orderTaxCalculator.Calculate(orderItem, orderContext);
...
return orderItem;
}
So you can test, and use, the code for each responsibility and country separately. It is easy to mock each component, as well as swap them at runtime depending on the user's choice.
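A minimal, self-contained sketch of that separation; the interface and class names here are illustrative, not from any library:

```csharp
using System;

// Hypothetical collaborators: each isolates one responsibility so it can
// be tested on its own or swapped per country.
public interface IOrderItemVerifier { void Verify(OrderItem item); }
public interface ITaxCalculator { decimal Calculate(OrderItem item); }

public class OrderItem
{
    public string Name { get; set; }
    public decimal Price { get; set; }
    public decimal Tax { get; set; }
}

public class OrderItemBuilder
{
    private readonly IOrderItemVerifier _verifier;
    private readonly ITaxCalculator _taxCalculator;

    public OrderItemBuilder(IOrderItemVerifier verifier, ITaxCalculator taxCalculator)
    {
        _verifier = verifier;
        _taxCalculator = taxCalculator;
    }

    public OrderItem Build(string name, decimal price)
    {
        var item = new OrderItem { Name = name, Price = price };
        _verifier.Verify(item);                     // country-specific rules live here
        item.Tax = _taxCalculator.Calculate(item);  // tax rules live here
        return item;
    }
}

// Simple stand-ins showing how easily the collaborators are faked in tests.
public class NoOpVerifier : IOrderItemVerifier
{
    public void Verify(OrderItem item) { /* accept everything */ }
}

public class FlatTax : ITaxCalculator
{
    public decimal Calculate(OrderItem item) { return item.Price * 0.2m; }
}
```

The builder stays tiny, and each rule set can be replaced without touching Order at all.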
To model a composite transaction, use two classes: a Transaction (Order) and a LineItem (OrderLineItem) class. Each LineItem is then associated with a particular Product.
When it comes to behavior, adopt the following rule:
"An action on an object in the real world becomes a service (method) of that object in an object-oriented approach."
I am creating a web application that manages documents. These documents have stages. Users will be able to reject a document from its current stage back to the previous stage.
So the flow will be like this: document stage one approved > get next stage and set document stage to next stage > document stage one REJECTED > get previous stage and set document stage to previous stage.
Now what I need help with is how to manage the stages back and forth, and what is the best way to set up my entities?
Example Entities
public class Document
{
public virtual int Id { get; set; }
public virtual string Name { get; set; }
public virtual Stage Stage { get; set; }
}
public class Stage
{
public virtual int Id { get; set; }
public virtual string Name { get; set; }
}
Use an enum
Replace your class Stage with an Enum
public enum Stage
{
Rejected, None, Approved, Etc
}
In your NHibernate mapping, simply add the Stage enum to your map:
<property name="Stage"></property>
In your DB you can simply make the Stage column an int32, and NHibernate will figure out how to persist and load the enum automagically.
The advantage of using an enum is that you can always cast the enum to an int and decrement or increment to get the previous or next stage (assuming that you are simply adding them in 0..N).
Stage nextStage = (Stage)((int)currentDocument.Stage + 1);
Stage previousStage = (Stage)((int)currentDocument.Stage - 1);
Otherwise you can use a LINQ query to get the previous or next stage.
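A self-contained sketch of the increment/decrement idea, with clamping at both ends so you never step outside the defined 0..N range (the stage names here are illustrative):

```csharp
using System;

public enum Stage { Rejected, None, Approved, Published }

public static class StageNavigator
{
    // In C#, enum + int yields the enum type, so stepping is a one-liner;
    // the comparisons clamp the result to the defined range.
    public static Stage Next(Stage current)
    {
        return current >= Stage.Published ? Stage.Published : current + 1;
    }

    public static Stage Previous(Stage current)
    {
        return current <= Stage.Rejected ? Stage.Rejected : current - 1;
    }
}
```

Without the clamping, incrementing past the last member produces an undefined enum value, which NHibernate would happily persist.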
Edit
So far in your requirements you haven't listed anything that needs the complexity of a generic workflow. Here is a sample app which uses WWF with a document approval system similar to what you require:
http://www.codeproject.com/KB/WF/wwf_basics_files.aspx
Until you actually need something of WWF's complexity, I would recommend that you use the enum and then refactor when your requirements change. This way you're not implementing a feature "just in case".
Hey all. I realize this is a rather long question, but I'd really appreciate any help from anyone experienced with RIA services. Thanks!
I'm working on a Silverlight 4 app that views data from the server. I'm relatively inexperienced with RIA Services, so have been working through the tasks of getting the data I need down to the client, but every new piece I add to the puzzle seems to be more and more problematic. I feel like I'm missing some basic concepts here, and it seems like I'm just 'hacking' pieces on, in time-consuming ways, each one breaking the previous ones as I try to add them. I'd love to get the feedback of developers experienced with RIA services, to figure out the intended way to do what I'm trying to do. Let me lay out what I'm trying to do:
First, the data. It comes from a variety of sources, primarily a shared library which reads data from our database and exposes it as POCOs (Plain Old CLR Objects). I'm creating my own POCOs to represent the different types of data I need to pass between server and client.
DataA - This app is for viewing a certain type of data, let's call it DataA, in near real time. Every 3 minutes, the client should pull down all the new DataA created since the last time it requested data.
DataB - Users can view the DataA objects in the app, and may select one of them from the list, which displays additional details about that DataA. I'm bringing these extra details down from the server as DataB.
DataC - One of the things that DataB contains is a history of a couple important values over time. I'm calling each data point of this history a DataC object, and each DataB object contains many DataCs.
The Data Model - On the server side, I have a single DomainService:
[EnableClientAccess]
public class MyDomainService : DomainService
{
public IEnumerable<DataA> GetDataA(DateTime? startDate)
{
/*Pieces together the DataAs that have been created
since startDate, and returns them*/
}
public DataB GetDataB(int dataAID)
{
/*Looks up the extended info for that dataAID,
constructs a new DataB with that DataA's data,
plus the extended info (with multiple DataCs in a
List<DataC> property on the DataB), and returns it*/
}
//Not exactly sure why these are here, but I think it
//wouldn't compile without them for some reason? The data
//is entirely read-only, so I don't need to update.
public void UpdateDataA(DataA dataA)
{
throw new NotSupportedException();
}
public void UpdateDataB(DataB dataB)
{
throw new NotSupportedException();
}
}
The classes for DataA/B/C look like this:
[KnownType(typeof(DataB))]
public partial class DataA
{
[Key]
[DataMember]
public int DataAID { get; set; }
[DataMember]
public decimal MyDecimalA { get; set; }
[DataMember]
public string MyStringA { get; set; }
[DataMember]
public DateTime MyDateTimeA { get; set; }
}
public partial class DataB : DataA
{
[Key]
[DataMember]
public int DataAID { get; set; }
[DataMember]
public decimal MyDecimalB { get; set; }
[DataMember]
public string MyStringB { get; set; }
[Include] //I don't know which of these, if any, I need?
[Composition]
[Association("DataAToC","DataAID","DataAID")]
public List<DataC> DataCs { get; set; }
}
public partial class DataC
{
[Key]
[DataMember]
public int DataAID { get; set; }
[Key]
[DataMember]
public DateTime Timestamp { get; set; }
[DataMember]
public decimal MyHistoricDecimal { get; set; }
}
I guess a big question I have here is... Should I be using Entities instead of POCOs? Are my classes constructed correctly to be able to pass the data down correctly? Should I be using Invoke methods instead of Query (Get) methods on the DomainService?
On the client side, I'm having a number of issues. Surprisingly, one of my biggest ones has been threading. I didn't expect there to be so many threading issues with MyDomainContext. What I've learned is that you can only create MyDomainContext objects on the UI thread, that all of the queries are asynchronous only, and that if you try to fake doing it synchronously by blocking the calling thread until the LoadOperation finishes, you have to do so on a background thread, since the query itself is made on the UI thread. So here's what I've got so far.
The app should display a stream of the DataA objects, spreading each 3min chunk of them over the next 3min (so they end up displayed 3min after they occurred, looking like a continuous stream, but only have to be downloaded in 3min bursts). To do this, the main form initializes, creates a private MyDomainContext, and starts a background worker which loops continuously in a while(true). On each loop, it checks whether it has any DataAs left to display. If so, it displays that data and Thread.Sleep()s until the next DataA is scheduled to be displayed. If it's out of data, it queries for more, using the following methods:
public DataA[] GetDataAs(DateTime? startDate)
{
_loadOperationGetDataACompletion = new AutoResetEvent(false);
LoadOperation<DataA> loadOperationGetDataA = null;
loadOperationGetDataA =
_context.Load(_context.GetDataAQuery(startDate),
System.ServiceModel.DomainServices.Client.LoadBehavior.RefreshCurrent, false);
loadOperationGetDataA.Completed += new
EventHandler(loadOperationGetDataA_Completed);
_loadOperationGetDataACompletion.WaitOne();
List<DataA> dataAs = new List<DataA>();
foreach (var dataA in loadOperationGetDataA.Entities)
dataAs.Add(dataA);
return dataAs.ToArray();
}
private static AutoResetEvent _loadOperationGetDataACompletion;
private static void loadOperationGetDataA_Completed(object sender, EventArgs e)
{
_loadOperationGetDataACompletion.Set();
}
It seems kind of clunky trying to force this to be synchronous, but since it's already on a background thread, I think this is OK? So far, everything actually works, as much of a hack as it seems. It's important to note that if I try to run this code on the UI thread, it locks: it waits on the WaitOne() forever, blocking the thread, so the Load request to the server can never be made.
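Stripped of the RIA-specific types, the pattern in GetDataAs reduces to this self-contained sketch, which also shows why it only works off the UI thread:

```csharp
using System;
using System.Threading;

// The blocking pattern above, with a simulated asynchronous operation
// standing in for context.Load(...).
public static class SyncOverAsync
{
    public static int LoadBlocking()
    {
        var completion = new AutoResetEvent(false);
        int result = 0;

        // Completes on a thread-pool thread, like the Completed event.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            result = 42; // pretend this is the loaded data
            completion.Set();
        });

        // Safe only on a background thread: if the completion callback
        // needed this thread (as the UI-thread case does), this deadlocks.
        completion.WaitOne();
        return result;
    }
}
```

The WaitOne() is harmless here because the callback runs on a different thread; in the RIA case the callback is marshalled to the UI thread, which is exactly the thread being blocked.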
So once the data is displayed, users can click on one as it goes by, to fill a details pane with the full DataB data about that object. To do that, I have the details pane user control subscribe to a selection event I set up, which gets fired when the selection changes (on the UI thread). I use a similar technique there to get the DataB object:
void SelectionService_SelectedDataAChanged(object sender, EventArgs e)
{
DataA dataA = /*Get the selected DataA*/;
MyDomainContext context = new MyDomainContext();
var loadOperationGetDataB =
context.Load(context.GetDataBQuery(dataA.DataAID),
System.ServiceModel.DomainServices.Client.LoadBehavior.RefreshCurrent, false);
loadOperationGetDataB.Completed += new
EventHandler(loadOperationGetDataB_Completed);
}
private void loadOperationGetDataB_Completed(object sender, EventArgs e)
{
this.DataContext =
((LoadOperation<DataB>)sender).Entities.SingleOrDefault();
}
Again, it seems kind of hacky, but it works... except that on the DataB it loads, the DataCs list is empty. I've tried all kinds of things there, and I don't see what I'm doing wrong that stops the DataCs from coming down with the DataB. I'm about ready to make a third query just for the DataCs, but that screams even more hackiness to me.
It really feels like I'm fighting against the grain here, like I'm doing this in an entirely unintended way. If anyone could offer any assistance, and point out what I'm doing wrong here, I'd very much appreciate it!
Thanks!
I have to say it does seem a bit overly complex.
If you use Entity Framework (which, as of version 4, can generate POCOs if you need them) and RIA/LINQ, you can do all of that implicitly using lazy loading, 'expand' statements, and table-per-type inheritance.