I am creating a web application that manages documents. These documents have stages. Users will be able to reject these documents from the current stage back to the previous stage.
So the flow will be like this: document in stage one approved > get next stage and set document stage to next stage > document in stage one rejected > get previous stage and set document stage to previous stage.
Now what I need help with is how to manage the stages back and forth, and what is the best way to set up my entities?
Example Entities
public class Document
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual Stage Stage { get; set; }
}

public class Stage
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}
Use an enum
Replace your class Stage with an Enum
public enum Stage
{
    Rejected, None, Approved, Etc
}
In your NHibernate mapping simply add the Stage enum to your map:
<property name="Stage"></property>
In your database you can simply create the Stage column as an int, and NHibernate will figure out how to persist and load the enum automagically.
The advantage of using an enum is that you can always cast the enum to an int and decrement or increment to get the previous or next stage (assuming that you are simply adding them in 0..N).
Stage nextStage = (Stage)((int)currentDocument.Stage + 1);
Stage previousStage = (Stage)((int)currentDocument.Stage - 1);
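Building on that, a bounds-checked helper might look like this sketch (assuming the Stage values are declared in workflow order with no gaps, as noted above):

// Hedged sketch: walk the Stage enum by casting to int, guarding both ends.
public static class StageExtensions
{
    public static Stage Next(this Stage current)
    {
        int max = Enum.GetValues(typeof(Stage)).Length - 1;
        return (Stage)Math.Min((int)current + 1, max);
    }

    public static Stage Previous(this Stage current)
    {
        return (Stage)Math.Max((int)current - 1, 0);
    }
}

// Usage: currentDocument.Stage = currentDocument.Stage.Next();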
Otherwise you can use a LINQ query to get the previous or next stages.
Edit
So far in your requirements you haven't listed anything that needs the complexity of a generic workflow. Here is a sample app which uses WWF with a document approval system similar to what you require:
http://www.codeproject.com/KB/WF/wwf_basics_files.aspx
Until you actually need something of WWF's complexity, I would recommend that you use the enum and then refactor when your requirements change. This way you're not implementing a feature "just in case".
Related
My project is an online food ordering app; the key feature of this app is the "Daily nutrients intake monitor". This monitor shows the difference between the recommended daily intake values for 30 types of nutrients and the actual nutrients contained in the foods in the user's shopping cart.
I created 30 models based on those nutrients, and each of them has an InputData class which inherits from a base class, NutrientInputDataBase. Below are the added-sugar InputData class and the base class:
public class AddedSugarUlInputData : NutrientInputDataBase
{
    [ColumnName(@"AddedSugar-AMDR-UL")]
    public float AddedSugar_AMDR_UL { get; set; }
}

public class NutrientInputDataBase
{
    [ColumnName(@"Sex")]
    public float Sex { get; set; }

    [ColumnName(@"Age")]
    public float Age { get; set; }

    [ColumnName(@"Activity")]
    public float Activity { get; set; }

    [ColumnName(@"BMI")]
    public float BMI { get; set; }

    [ColumnName(@"Disease")]
    public float Disease { get; set; }
}
From the official documentation:
https://learn.microsoft.com/en-us/dotnet/machine-learning/how-to-guides/serve-model-web-api-ml-net
I understood that I need to create a PredictionEnginePool, and I already know how to register the PredictionEnginePool in the application startup file.
My app logic is: when the user adds or removes an item from the shopping cart, the front end calls the API, the back end gets the user profile first (to obtain the input data for the prediction), then returns a packaged object which contains the prediction results for all 30 nutrients.
My question is: should I register a PredictionEnginePool for each one of the nutrient models individually in the Startup file, or is there a more efficient way that I haven't been aware of?
There are multiple ways for you to go about it.
Register each of your models in the PredictionEnginePool. The FromFile and FromUri methods allow you to specify a name for each of your models, so when you use them to make predictions in your application you can reference them by name.
Or save your model to a database as a blob. Then you can add logic in your application to load a specific model based on the criteria you specify. The downside to this is you'd have to fetch your models more dynamically rather than having a PredictionEnginePool ready to go.
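For the first approach (registering each model by name), a minimal Startup sketch might look like the following; NutrientPrediction, CalciumUlInputData, the model names and the file paths are assumptions for illustration only:

// Startup.ConfigureServices: one named model per registration (sketch, names assumed).
services.AddPredictionEnginePool<AddedSugarUlInputData, NutrientPrediction>()
    .FromFile(modelName: "AddedSugarUl", filePath: "MLModels/AddedSugarUl.zip", watchForChanges: true);

services.AddPredictionEnginePool<CalciumUlInputData, NutrientPrediction>()
    .FromFile(modelName: "CalciumUl", filePath: "MLModels/CalciumUl.zip", watchForChanges: true);

// In a controller, with PredictionEnginePool<AddedSugarUlInputData, NutrientPrediction>
// injected as _addedSugarPool, you can then predict by model name:
// var result = _addedSugarPool.Predict(modelName: "AddedSugarUl", example: inputData);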
I have the following class that holds the start and end values of a task
public class WorkPackage
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime FromTime { get; set; }
    public DateTime ToTime { get; set; }
}
I also show the saved values as follows (grouped based on FromTime).
The end user can:
Move one WorkPackage from one month to another by drag & drop.
Split one WorkPackage into two or more WorkPackages based on some rules.
So I have the following methods:
public void MoveWorkPackageToMonth(WorkPackage wp, int month)
{
    ....
}

public List<WorkPackage> SplitWorkPackage(WorkPackage wp)
{
    ....
}
Each time, the user makes a lot of changes to the WorkPackage list, but the list may be rebuilt every few days for business reasons, and the user wants to apply the same changes to the re-created WorkPackage list. So I need to save the user's actions in the database in order to replay them on the re-created list.
I want to add something like a scripting language to save the user's actions as a string, something like this:
"Move(WP1){From(January) To(April)};SPLIT(WP5);"
Is there any library to help me, or do I have to define my own custom business language? (I use .NET 4.)
The NuGet package ITVComponents.Scripting.CScript provides a scripting engine that supports interpreted as well as compiled code.
The syntax is derived from JavaScript with some special extensions to enable dynamic type loading and native script parts.
Unfortunately there is no documentation at the moment, but I'm working on it.
Basically, for what you want to achieve, you could use the following code:
ExpressionParser.ParseBlock(yourScript, yourObject, s => DefaultCallbacks.PrepareDefaultCallbacks(s.Scope, s.ReplSession));
where yourObject would be an object that implements the methods MoveWorkPackageToMonth and
SplitWorkPackage, a method to find a package (FindPackage?), and the months as integer properties, and your script would look something like
MoveWorkPackageToMonth(FindPackage("nameOfPackage1"), April); MoveWorkPackageToMonth(FindPackage("nameOfPackage2"), February); ...
This code will run interpreted; therefore, if you have plenty of actions to perform, it may be slow.
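For illustration, a minimal sketch of such a host object might look like this (the class name, the FindPackage lookup and the month properties are assumptions based on the description above; Single() requires System.Linq):

// Hypothetical host object handed to ExpressionParser.ParseBlock as yourObject.
public class WorkPackageScriptHost
{
    private readonly List<WorkPackage> _workPackages;

    public WorkPackageScriptHost(List<WorkPackage> workPackages)
    {
        _workPackages = workPackages;
    }

    // Months exposed as integer properties so the script can say "April" instead of 4.
    public int January { get { return 1; } }
    public int February { get { return 2; } }
    public int April { get { return 4; } }
    // ... remaining months omitted for brevity

    public WorkPackage FindPackage(string name)
    {
        return _workPackages.Single(wp => wp.Name == name);
    }

    public void MoveWorkPackageToMonth(WorkPackage wp, int month)
    {
        // existing move logic from the question goes here
    }

    public List<WorkPackage> SplitWorkPackage(WorkPackage wp)
    {
        // existing split logic from the question goes here
        return new List<WorkPackage>();
    }
}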
I am working on a service-oriented architecture. I have 3 tables: Meeting, Stakeholder and MeetingStakeholder (a junction table).
A simple representation of POCO classes for all 3 tables:
public class Meeting
{
    public int Id { get; set; }
    public IList<MeetingStakeholder> MeetingStakeholders { get; set; }
}

public class Stakeholder
{
    public int Id { get; set; }
}

public class MeetingStakeholder
{
    public int Id { get; set; }
    public int MeetingId { get; set; }
    public Meeting Meeting { get; set; }
    public int StakeholderId { get; set; }
    public Stakeholder Stakeholder { get; set; }
}
A simple representation of Meeting Dto:
public class MeetingDto
{
    public int Id { get; set; }
    public IList<int> StakeholderIds { get; set; }
}
In the PUT action,
PUT: api/meetings/1
I first remove all existing records from MeetingStakeholder (the junction table), then prepare a new List<MeetingStakeholder> meetingStakeholders using meetingDto.StakeholderIds and create them:
{
    List<MeetingStakeholder> existingMeetingStakeholders =
        _unitOfWork.MeetingStakeholderRepository.Where(x => x.MeetingId == meetingDto.Id);
    _unitOfWork.MeetingStakeholderRepository.RemoveRange(existingMeetingStakeholders);

    List<MeetingStakeholder> meetingStakeholders = ... ;

    _unitOfWork.MeetingRepository.Update(meeting);
    _unitOfWork.MeetingStakeholderRepository.CreateRange(meetingStakeholders);
    _unitOfWork.SaveChanges();

    return Ok(meetingDto);
}
Everything seems fine to me, but my architect told me that I am doing the wrong thing.
He said that in the PUT action (according to SRP) I should not be removing and re-creating MeetingStakeholder records; I should be responsible for updating the meeting object only.
According to him, MeetingStakeholderIds (an array of integers) should be sent in the request body to these routes:
For assigning new stakeholders to a meeting:
POST: api/meetings/1/stakeholders
For removing existing stakeholders from a meeting:
DELETE: api/meetings/1/stakeholders
But the problem is, in the meeting edit screen my front-end developer uses a multi-select for stakeholders. He would need to maintain two arrays of integers:
The first array for the stakeholder Ids which the end user unselects in the multi-select.
The second array for the newly selected stakeholder Ids.
Then he would send these two arrays to their respective routes as I mentioned above.
If my architect is right then I have no problem, but how should my front-end developer handle stakeholder selection in the edit screen?
One thing I want to clarify is that my junction table is very simple; it does not contain additional columns other than MeetingId and StakeholderId (a very basic junction). So in this scenario, does it make sense to create separate POST/DELETE actions on "api/meetings/1/stakeholders" that receive StakeholderIds (a list of integers), instead of receiving StakeholderIds directly in the MeetingDto?
First of all, if I am not mistaken:
you have a resource: "Meeting";
you want to update said resource (using HTTP PUT).
So updating a meeting by requesting a PUT on "/api/meetings/:id" seems fairly simple, concise, direct and clear. All good traits for designing a good interface. And it still respects the Single Responsibility Principle: you are updating a resource.
Nonetheless, I also agree with your architect in providing, in addition to the previous method, POST/DELETE actions on "api/meetings/1/stakeholders" if the requirements justify it. We should be pragmatic at some level and not overengineer something that isn't required.
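For illustration only, those extra actions might look something like this sketch (ASP.NET Web API style; the controller name, routes and signatures are assumptions, not your actual code):

// Hypothetical sub-resource controller for a meeting's stakeholders.
public class MeetingStakeholdersController : ApiController
{
    // POST: api/meetings/1/stakeholders - assign new stakeholders to the meeting
    [HttpPost, Route("api/meetings/{meetingId}/stakeholders")]
    public IHttpActionResult Post(int meetingId, [FromBody] List<int> stakeholderIds)
    {
        // create one MeetingStakeholder row per id ...
        return Ok();
    }

    // DELETE: api/meetings/1/stakeholders - remove existing stakeholders from the meeting
    [HttpDelete, Route("api/meetings/{meetingId}/stakeholders")]
    public IHttpActionResult Delete(int meetingId, [FromBody] List<int> stakeholderIds)
    {
        // remove the matching MeetingStakeholder rows ...
        return Ok();
    }
}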
Now, if your architect said that just because of HOW IT IS PERSISTED, then he is wrong. Interfaces should be clear to the end user (a frontend today, another service or app tomorrow...), but most importantly, in this case, ignorant of persistence or any implementation detail for that matter.
Your API should focus on your domain and your business rules, not on how you store the information.
This is just my view. If someone does not agree with me I would like to be called out, so we can both grow and learn together.
Hope I could be of some help. Cheers :)
I'm writing a system to track observation values from sensors (e.g. temperature, wind direction and speed) at different sites. I'm writing it in C# (within VS2015) using a code-first approach. Although I have a reasonable amount of programming experience, I'm relatively new to C# and the code-first approach.
I've defined my classes as below. I've built a REST API to accept observation readings through POST, which has driven my desire to have Sensor keyed by a string rather than an integer - some sensors have their own unique identifier built in. Otherwise, I'm trying to follow the Microsoft Contoso University example (instructors - courses - enrolments).
What I am trying to achieve is a page for a specific site with a list of the sensors at the site, and their readings. Eventually this page will present the data in graphical form, but for now I'm just after the raw data.
public class Site
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<Sensor> Sensors { get; set; }
}

public class Sensor
{
    [Key]
    public string SensorName { get; set; }
    public int SensorTypeId { get; set; }
    public int SiteId { get; set; }
    public ICollection<Observation> Observations { get; set; }
}

public class Observation
{
    public int Id { get; set; }
    public string SensorName { get; set; }
    public float ObsValue { get; set; }
    public DateTime ObsDateTime { get; set; }
}
and I've created a View Model for the page I'm going to use...
public class SiteDataViewModel
{
    public Site Site { get; set; }
    public IEnumerable<Sensor> Sensors { get; set; }
    public IEnumerable<Observation> Observations { get; set; }
}
and then I try to join up the 3 classes into that view model in SiteController.cs...
public actionresult Details()
var viewModel.Site = _context.Sites
.Include(i => i.Sensors.select(c => c.Observations));
I used to get an error about "cannot convert lambda expression to type string", but then I included "using System.Data.Entity;" and the error changed to two errors... on the 'Include', I get "cannot resolve method 'Include(lambda expression)'...". And on the 'select' I get "ICollection does not include a definition for select..."
There's probably all sorts of nastiness going on, but if someone could explain where the errors are (and more importantly why they are errors), then I'd be extremely grateful.
Simply, you can use it like this:
viewModel.Site = _context.Sites
    .Include("Sensors").Include("Sensors.Observations");
Hope this helps.
The way your ViewModel is set up, you're going to have 3 unrelated sets of data: sites, sensors, and observations. Sites will have no inherent relation to sensors -- you'll have to manually match them on the foreign key. Realistically, your ViewModel should just be a list of Sites. You want to do
@Model.Sites[0].Sensors[0].Observations[0]
not something convoluted like
var site = @Model.Sites[0]; var sensor = @Model.Sensors.Where(s => s.SiteId == site.Id).Single(); etc...
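A minimal sketch of that reshaped view model might be the following (it relies on the navigation properties already present on your Site and Sensor classes):

// Hypothetical reshaped view model: just the sites; sensors and observations
// are reached through navigation properties once they are eager-loaded.
public class SiteDataViewModel
{
    public IList<Site> Sites { get; set; }
}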
Try doing
viewModel.Sites = _context.Sites.Include("Sensors.Observations").ToList();
Eager-loading multiple levels of EF Relations can be accomplished in just one line.
One of the errors you reported receiving, by the way, is because you're using 'select' instead of 'Select'.
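If you prefer the lambda form your original code attempted, the corrected query could look like this sketch (assuming EF6 with using System.Data.Entity; in scope, and the list-of-sites view model shape suggested above):

// Lambda-based eager loading (EF6 / System.Data.Entity); note the capital S in Select.
viewModel.Sites = _context.Sites
    .Include(s => s.Sensors.Select(se => se.Observations))
    .ToList();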
And lastly, be aware that eager-loading like this can produce a huge amount of in-memory data. Consider splitting up your calls for each relation, such that you display a list of Sensors, and clicking, say, a dropdown will call an API that retrieves a list of Sites, etc. This is a bit more streamlined, and it prevents you from getting held up because your page is loading so much information.
Update
I've created a sample application for you that you can browse and look through. Data is populated in the Startup.Configure method, and retrieved in the About.cshtml.cs file and the About.cshtml page. This produces the page which is what you're looking for, I believe.
I am trying to refactor a solution to bring on board another project.
I have a Core project where common classes across projects reside.
I've tried to simplify my question by using 2 imaginary projects: Holidays and Weather...
I have a file load process set up for the Holidays project which has the following 2 classes:
public class Job
{
    public virtual string CreatedBy { get; set; }
    public virtual DateTime? CreatedDate { get; set; }
    public virtual Security Security { get; set; }

    protected IList<File> _files = new List<File>();
    public virtual IEnumerable<File> Files
    {
        get { return _files; }
    }
}

public class File
{
    public virtual string FileName { get; set; }
    public virtual FileType FileType { get; set; }
    public virtual FileStatusType FileStatusType { get; set; }
    public virtual Job Job { get; set; }
}
The file load process for the Weather project has exactly the same structure as Holidays, except that the Job class does not have a Security property.
My question is, is it possible to somehow move both classes into the Core project to allow both projects to use them?
Obviously Weather does not need the Security property, so I was thinking I would have a Core.Job class without Security, and then extend the Core.Job in Holidays.Job.
But once I do that, in the Core.File class, what Job is it referring to? As it sits in the Core project it must be the Core.Job.
So would I then need to have Job and File sit in Holidays, and Weather (and any other future projects) use the Core.Job and Core.File?
I don't want the Core project to have any references to sub projects.
I am using NHibernate, and so have mapping files - adding to the complexity.
Hope this is clear enough
Thanks
You can certainly do this, but I am not sure whether it brings you true benefit:
Does the Core itself work with the base Job in any way? If it does not, implementing Job separately in each project may help you keep coupling loose, even though it'd be a little redundant. In code I wrote, I have sometimes introduced unnecessary dependencies by extracting interfaces without adding true benefit. This is why I am a bit cautious.
In case Core does actual work with it, the part to refactor into the common base Job is perhaps the interface it works with.
You may think of an interface instead of a base class, as sketched below. Security may semantically belong to another interface. Moreover, you hand over a lot of control over your classes to the Core.
Do you ever hand a job from one project to another (or are they mapped to the same DB table via NHibernate)? If you don't, an internal redundant class may be fine too.
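To illustrate the interface idea, a hedged sketch (the interface names and the member split are assumptions, not a prescribed design):

// Hypothetical split: the part Core actually works with ...
public interface IJob
{
    string CreatedBy { get; set; }
    DateTime? CreatedDate { get; set; }
    IEnumerable<File> Files { get; }
}

// ... and the security concern kept in a separate interface, implemented only in Holidays.
public interface ISecuredJob : IJob
{
    Security Security { get; set; }
}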
It is not very clear why there is confusion about the solution you offered yourself (assuming that I understood you correctly):
//Core DLL
public class Job
{
    public virtual string CreatedBy { get; set; }
    public virtual DateTime? CreatedDate { get; set; }

    protected IList<File> _files = new List<File>();
    public virtual IEnumerable<File> Files
    {
        get { return _files; }
    }
}
In the Holidays project you have:
public class HollidayJob : Job
{
    public virtual Security Security { get; set; }
}
In Weather simply use the type Job, if it is self-sufficient.
In this case you reference the Core DLL from both the Holidays project and Weather. When you persist it via NHibernate, HollidayJob saves one field more, but when Weather reads the same table it skips that field, as it doesn't know anything about it and doesn't actually care about it.
Hope this helps.